If you are ambitious about learning emerging technologies and want to make them reliable, secure, and exciting, then you may choose any of the following projects. There is no strict prerequisite for these projects; however, a keen interest in programming languages, software development, Artificial Intelligence, and analytical problem solving (i.e., mathematical logic-based reasoning) will be of great help.
Description: The goal of this project is to evaluate and harness LLMs to automatically generate a model of a given program that is amenable to subsequent checking.
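As a rough illustration (not a prescribed approach), the sketch below asks an LLM to emit a state-machine style model of a small Python function so that the model can be checked later; `ask_llm`, its canned return value, and the choice of property are all placeholder assumptions for whatever LLM client and checker the project adopts.

```python
import inspect

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with a call to your chosen model."""
    return "(placeholder model text returned by the LLM)"

def extract_model(func) -> str:
    """Ask the LLM for a checkable model (states, transitions, property) of `func`."""
    source = inspect.getsource(func)
    prompt = (
        "Produce a finite-state model of the following Python function, suitable "
        "for model checking, including the property 'the lock is always released':\n\n"
        + source
    )
    return ask_llm(prompt)

def locked_update(lock, counter):
    lock.acquire()
    try:
        counter += 1
    finally:
        lock.release()
    return counter

if __name__ == "__main__":
    # The returned model text would be handed to a model checker in a later step.
    print(extract_model(locked_update))
```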
Description: The goal of this project is to evaluate and harness LLMs to automatically generate invariants for a given program.
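One possible shape for such a pipeline, sketched below under the assumption of a hypothetical `ask_llm` client with canned output: the LLM proposes candidate invariants as Python expressions, and the candidates are vetted dynamically against loop-head states collected from a few runs (a real project would follow this with a proper proof step).

```python
def ask_llm(prompt: str) -> list[str]:
    """Hypothetical LLM client: returns candidate loop invariants as Python expressions."""
    return ["total == sum(xs[:i])", "i <= len(xs)", "total >= 0"]  # canned placeholder

def summed(xs):
    """Program under analysis; yields the loop-head state so invariants can be checked."""
    total, i = 0, 0
    while i < len(xs):
        yield {"xs": xs, "i": i, "total": total}
        total += xs[i]
        i += 1
    yield {"xs": xs, "i": i, "total": total}

def filter_invariants(candidates, traces):
    """Keep only candidates that hold in every observed state (dynamic vetting)."""
    kept = []
    for inv in candidates:
        try:
            if all(eval(inv, {}, dict(state)) for state in traces):
                kept.append(inv)
        except Exception:
            pass  # discard ill-formed candidates
    return kept

if __name__ == "__main__":
    traces = [s for xs in ([1, 2, 3], [5], []) for s in summed(xs)]
    candidates = ask_llm("Propose loop invariants for a running-sum loop over xs.")
    print(filter_invariants(candidates, traces))
```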
Description: The goal of this project is to develop LLM-driven techniques to automatically test the security of a given application, e.g., penetration testing (PenTesting).
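A minimal sketch of the idea, assuming a locally hosted test application of your own at a hypothetical URL and a hypothetical `ask_llm` client: the LLM suggests inputs, and the harness replays them and flags anomalous responses.

```python
import requests  # third-party HTTP client (pip install requests)

TARGET = "http://localhost:8000/login"  # hypothetical: your own test application only

def ask_llm(prompt: str) -> list[str]:
    """Hypothetical LLM client: returns candidate inputs that commonly expose flaws."""
    return ["' OR '1'='1", "<script>alert(1)</script>", "admin' --"]  # canned placeholder

def probe(payloads):
    """Replay LLM-suggested payloads and record responses that look suspicious."""
    findings = []
    for payload in payloads:
        resp = requests.post(TARGET, data={"username": payload, "password": "x"}, timeout=5)
        # Crude oracle: a server error or an unexpectedly successful login hints at a weakness.
        if resp.status_code >= 500 or "Welcome" in resp.text:
            findings.append((payload, resp.status_code))
    return findings

if __name__ == "__main__":
    payloads = ask_llm("Suggest inputs that commonly expose injection flaws in a login form.")
    for payload, status in probe(payloads):
        print(f"suspicious response ({status}) for payload: {payload!r}")
```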
Description: The goal of this project is to develop LLM-driven techniques to automatically infer the specification of a given software application.
Description: The goal of this project is to develop LLM-driven techniques to automatically infer tactics and to verify a given software application and its specification.
Description: The goal of this project is to evaluate and harness LLMs to detect various security-related issues (i.e., code patterns) in software applications written in the programming language of your choice. Application data can be drawn from open-source GitHub repositories and other such sources.
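As a sketch of what the evaluation harness might look like, the code below walks over the Python files of a locally cloned repository and asks an LLM per file for insecure patterns; `ask_llm`, its canned reply, the query text, and the repository path are all illustrative assumptions.

```python
import pathlib

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM client; a real implementation would call your chosen model."""
    return "NO_ISSUES"  # canned placeholder

PATTERN_QUERY = (
    "List any insecure patterns in this code (hard-coded secrets, SQL built by string "
    "concatenation, use of eval/exec, weak hashing). Answer NO_ISSUES if none.\n\n"
)

def scan_repo(root: str) -> dict:
    """Scan every .py file under a locally cloned repository and collect LLM findings."""
    findings = {}
    for path in pathlib.Path(root).rglob("*.py"):
        code = path.read_text(errors="ignore")
        verdict = ask_llm(PATTERN_QUERY + code)
        if verdict.strip() != "NO_ISSUES":
            findings[str(path)] = verdict
    return findings

if __name__ == "__main__":
    for file, issues in scan_repo("./some-cloned-repo").items():  # hypothetical path
        print(file, "->", issues)
```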
Description: The goal of this project is to develop LLM-driven static or dynamic analysis techniques to identify vulnerabilities in software applications that can cause security or privacy issues.
Description: The goal of this project is to introduce refinement or dependent types in the programming language of your choice (e.g., Java, Python, PHP, C, C#) that can specify security-related properties or threats and can protect applications against the specified properties and threats. There are two aspects to the use of such types: one can develop such types for design-time protection, or one can generate a security monitor to protect applications at run-time against the specified attacks.
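Python has no native refinement types, so the sketch below only approximates the run-time-monitor aspect: a predicate-carrying "refinement" (here a toy SafeSql check) is attached to parameters via a decorator and enforced on every call. The names and the predicate are illustrative assumptions, not the project's intended design.

```python
import inspect
from functools import wraps

def refine(predicate, message):
    """A predicate-carrying 'type': values must satisfy `predicate` at run time."""
    def check(value):
        if not predicate(value):
            raise ValueError(f"{message}: {value!r}")
        return value
    return check

# Toy security refinement: strings containing no SQL metacharacters.
SafeSql = refine(lambda s: isinstance(s, str) and not any(c in s for c in "';-"),
                 "value violates SafeSql refinement")

def refined(**refinements):
    """Decorator acting as a run-time monitor for the declared refinements."""
    def deco(fn):
        sig = inspect.signature(fn)
        @wraps(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            for name, check in refinements.items():
                if name in bound.arguments:
                    check(bound.arguments[name])
            return fn(*args, **kwargs)
        return wrapper
    return deco

@refined(username=SafeSql)
def find_user(db, username):
    return db.get(username)

if __name__ == "__main__":
    print(find_user({"alice": 1}, "alice"))      # satisfies the refinement
    try:
        find_user({}, "x' OR '1'='1")            # violates SafeSql
    except ValueError as err:
        print("blocked:", err)
```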
Description: The goal of this project is to use ARMET to monitor web applications at run-time. The application needs to be modelled in ARMET, which will then monitor it at run-time to detect known and unknown threats.
Description: The goal of this project is to develop a translator that translates a given web application into the Ur/Web language. The translation will identify those parts that are not protected by design and are thus security risks, which need to be handled either at design-time or at run-time. The Ur/Web language is proven to be secure by design against certain classes of security attacks, e.g.,
Description: The goal of this project is to develop a risk assessment technique that is capable of detecting known security risks to systems. In fact, we plan to use Jeeves to analyze the given APIs of a system implementation and perform a formal risk assessment of those APIs to detect any undesired data flows that may threaten the security of the system. We target APIs written for various machine learning algorithms to test our approach.
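The sketch below does not use the Jeeves API; it is only a conceptual stand-in showing the kind of undesired flow the assessment should flag, with a hand-rolled sensitivity label, a toy sink policy, and a hypothetical ML API usage.

```python
class Labeled:
    """A value carrying a sensitivity label, to make data flows explicit."""
    def __init__(self, value, label):
        self.value, self.label = value, label

# Toy policy: which labels each sink is allowed to receive.
ALLOWED = {
    "model_fit": {"public", "sensitive"},
    "telemetry_log": {"public"},
}

def check_flow(sink, data: Labeled, report):
    """Record a finding whenever labelled data reaches a sink that may not receive it."""
    if data.label not in ALLOWED[sink]:
        report.append(f"undesired flow: {data.label} data reaches sink '{sink}'")

def assess_api():
    """Walk a (hypothetical) ML API usage and record risky flows."""
    report = []
    patient_records = Labeled(["record-1", "record-2"], "sensitive")
    check_flow("model_fit", patient_records, report)      # fine: training may see it
    check_flow("telemetry_log", patient_records, report)  # flagged: leaks into logs
    return report

if __name__ == "__main__":
    for finding in assess_api():
        print(finding)
```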
Description: The goal of this project is to develop a library of classes of security threats that is amenable to verification. The project involves investigating various classes of attacks based on their causes and effects. An adequate model of the classes will be constructed from their corresponding cause-effect relations. A program will then be annotated with the developed models. Based on the annotations, the tool will translate the annotated program into WhyML, which will semi-automatically prove or disprove whether a given program is free from the modelled classes of threats (a rough sketch of this workflow follows below). Furthermore, the project will result in
The project aims to develop a library of threats based on the attacks described in the CAPEC repository, which is available at https://capec.mitre.org.
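To make the annotate-and-translate workflow concrete, here is a heavily simplified sketch: CAPEC identifiers are attached to a function as annotations, and a stub translator emits WhyML-like proof goals as text. The annotation syntax, the predicate names, and the emitted goal text are illustrative assumptions, not the library's actual design.

```python
# Illustrative only: the mapping and the emitted WhyML-like text are placeholders
# for whatever the threat-class library ends up defining.
THREAT_LIBRARY = {
    "CAPEC-66": "sql_injection",   # CAPEC-66: SQL Injection
    "CAPEC-63": "xss",             # CAPEC-63: Cross-Site Scripting
}

def threat(capec_id):
    """Annotation: mark a function as needing protection against a CAPEC class."""
    def deco(fn):
        fn.__threats__ = getattr(fn, "__threats__", []) + [capec_id]
        return fn
    return deco

def to_whyml(fn) -> str:
    """Translate the annotations on `fn` into WhyML-like proof goals (sketch)."""
    goals = []
    for capec in getattr(fn, "__threats__", []):
        name = THREAT_LIBRARY[capec]
        goals.append(
            f"goal {fn.__name__}_free_of_{name} :\n"
            f"  forall input: string. untrusted input -> "
            f"not ({name}_exploitable ({fn.__name__} input))"
        )
    return "\n\n".join(goals)

@threat("CAPEC-66")
def find_user(username: str) -> str:
    return f"SELECT * FROM users WHERE name = '{username}'"  # deliberately unsafe

if __name__ == "__main__":
    print(to_whyml(find_user))  # goals to be discharged (or refuted) with Why3
```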
Description: The goal of this project is to develop an intrusion detection technique that is capable of detecting known and hypothetical threats based on their models. First, a mapping needs to be established between threat models and their corresponding data representation. Based on this representation, the technique will automatically detect modelled threats without raising false alarms. Importantly, the models will include some hypothetical threats that the technique will exploit at run-time to detect unknown threats. Furthermore, the project will result in
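One very small way to picture the model-to-data mapping: threat models become predicates over event records, and the detector matches them against an event stream. The event schema, the two models, and the thresholds below are illustrative assumptions only.

```python
# Threat models as predicates over a hypothetical event schema:
# {"user": str, "action": str, "failures": int, "bytes_out": int}
THREAT_MODELS = {
    "brute_force_login": lambda e: e["action"] == "login" and e["failures"] >= 5,
    # A *hypothetical* threat kept in the library to catch unseen exfiltration variants.
    "bulk_exfiltration": lambda e: e["action"] == "download" and e["bytes_out"] > 10_000_000,
}

def detect(events):
    """Flag every event that matches a modelled (known or hypothetical) threat."""
    for event in events:
        for name, matches in THREAT_MODELS.items():
            if matches(event):
                yield name, event

if __name__ == "__main__":
    stream = [
        {"user": "bob", "action": "login", "failures": 7, "bytes_out": 0},
        {"user": "eve", "action": "download", "failures": 0, "bytes_out": 25_000_000},
        {"user": "ann", "action": "login", "failures": 0, "bytes_out": 0},
    ]
    for name, event in detect(stream):
        print(f"alert [{name}]: {event}")
```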
Description: The goal of this project is to evaluate LLMs to automatically check the compliance of software against various governance ordinances and directives by design. First, a program needs to be annotated with governance rules. Based on the annotations, the tool will translate the annotated program into WhyML, which will semi-automatically prove or disprove whether a given program adheres to the annotated governance rules (a rough sketch follows below). Furthermore, the project will result in
The project is motivated by online safety laws in the UK and will help to demonstrate compliance with such laws. For details, please follow the news on UK online safety legislation.
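As a rough sketch of the annotate-and-check step, the code below attaches a governance rule to a function and asks an LLM for a compliance verdict; the rule text, `ask_llm`, and its canned reply are hypothetical, and a later step would translate the rule into WhyML so the verdict can be checked in Why3.

```python
import inspect

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM client; returns COMPLIANT/NON_COMPLIANT with a reason."""
    return "NON_COMPLIANT: content is published without an age check."  # canned placeholder

def governed_by(rule_text):
    """Annotation attaching a governance rule (e.g., an online safety rule) to code."""
    def deco(fn):
        fn.__rule__ = rule_text
        return fn
    return deco

def check_compliance(fn) -> str:
    """Ask the LLM whether `fn` satisfies its annotated rule."""
    prompt = (
        f"Rule: {fn.__rule__}\n"
        "Does the following function comply? Answer COMPLIANT or NON_COMPLIANT "
        "with a one-line reason.\n\n" + inspect.getsource(fn)
    )
    return ask_llm(prompt)

@governed_by("Content must not be shown to users whose age has not been verified.")
def publish(post, user):
    return {"post": post, "visible_to": user}  # no age check: should be flagged

if __name__ == "__main__":
    print(check_compliance(publish))
```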
Description: The goal of this project is to evaluate LLMs to automatically check the compliance of software with ethical properties by design. First, a program needs to be annotated with ethical properties. Based on the annotations, the tool will translate the annotated program into WhyML (http://why3.lri.fr), which will semi-automatically prove or disprove whether a given program adheres to the annotated ethics. Furthermore, the project will result in