UG/PG Student Project Proposals

If you are ambitious about learning emerging technologies and want to make them reliable, secure, and exciting, then you may choose any of the following projects. Although there are no strict prerequisites for these projects, it will be of great help if you have a keen interest in programming languages, software development, Artificial Intelligence, and analytical problem solving (i.e., mathematical logic-based reasoning).


Project 1: Evaluating and Harnessing LLMs to Automatically Generate Program Models

Description: The goal of this project is to evaluate and harness LLMs to automatically generate a model of a given program that is amenable to later checking (e.g., by a model checker).

Project 2: Evaluating and Harnessing LLMs to Automatically Generate Program Invariants

Description: The goal of this project is to evaluate and harness LLMs to automatically generate invariants of a given program.
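
As a minimal illustration of the kind of artifact an LLM would be asked to produce, the Python sketch below checks a candidate loop invariant at run-time. The `divide` routine is a hypothetical example chosen purely for illustration; the invariant `a == b * q + r and r >= 0` is exactly the sort of annotation the project would aim to generate automatically.

```python
def divide(a: int, b: int) -> tuple[int, int]:
    """Compute quotient and remainder of a / b by repeated subtraction."""
    assert a >= 0 and b > 0          # precondition
    q, r = 0, a
    while r >= b:
        # Loop invariant an LLM would be asked to infer:
        assert a == b * q + r and r >= 0
        q, r = q + 1, r - b
    assert a == b * q + r and 0 <= r < b  # postcondition
    return q, r

print(divide(17, 5))  # -> (3, 2)
```

Run-time assertions like these give a cheap way to vet LLM-proposed invariants against test executions before attempting a full proof.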

Project 3: LLM-driven Security Testing of Software Applications

Description: The goal of this project is to develop LLM-driven techniques to automatically test the security of a given application, e.g., penetration testing (PenTesting).

Project 4: LLM-driven Program Specification

Description: The goal of this project is to develop LLM-driven techniques to automatically infer the specification of a given software application.

Project 5: LLM-driven Program Verification

Description: The goal of this project is to develop LLM-driven techniques to automatically infer proof tactics and to verify a given software application against its specification.

Project 6: Evaluating and Harnessing LLMs for security and/or privacy of applications

Description: The goal of this project is to evaluate and harness LLMs to detect various security-related issues (i.e., code patterns) in software applications written in the programming language of your choice. Application data can be drawn from GitHub and other such open-source repositories.

Project 7: LLM-driven Vulnerability analysis of applications for security

Description: The goal of this project is to develop LLM-driven static or dynamic analysis techniques to identify vulnerabilities in software applications that can cause security or privacy issues.
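
As a baseline against which an LLM-driven analysis could be compared, a conventional rule-based static pass can be sketched in a few lines of Python using the standard `ast` module. The deny-list and helper names below are illustrative only, not a real tool.

```python
import ast

# Illustrative deny-list; a real analysis would be far more complete.
DANGEROUS_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

def call_name(node: ast.Call) -> str:
    """Best-effort dotted name of the function being called."""
    f = node.func
    if isinstance(f, ast.Name):
        return f.id
    if isinstance(f, ast.Attribute) and isinstance(f.value, ast.Name):
        return f"{f.value.id}.{f.attr}"
    return ""

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, name) pairs for every call on the deny-list."""
    return [
        (node.lineno, call_name(node))
        for node in ast.walk(ast.parse(source))
        if isinstance(node, ast.Call) and call_name(node) in DANGEROUS_CALLS
    ]

sample = "import os\nos.system(cmd)\nx = eval(user_input)\n"
print(find_dangerous_calls(sample))  # -> [(2, 'os.system'), (3, 'eval')]
```

An LLM-driven technique could then be evaluated on how many true vulnerabilities it finds beyond such pattern matching, and at what false-positive rate.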

Project 8: Refinement or Dependent types based (design-time and/or run-time) security of applications

Description: The goal of this project is to introduce refinement or dependent types in the programming language of your choice (e.g., Java, Python, PHP, C, C#) that can specify security-related properties or threats and protect applications accordingly. There are two aspects to the use of such types: one can develop such types for design-time protection, or one can generate a security monitor that protects applications at run-time against the specified attacks.
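
A minimal run-time flavour of the idea can be sketched in Python: the class below emulates a refinement type {s : str | s matches an identifier pattern} by validating at construction. The names `SafeSql` and `order_by` are hypothetical, chosen only to illustrate the monitor aspect.

```python
import re

class SafeSql(str):
    """Hypothetical run-time refinement of str: SQL identifiers only.

    Emulates the refinement {s: str | s matches [A-Za-z_][A-Za-z0-9_]*},
    rejecting values that could smuggle SQL metacharacters.
    """
    _PATTERN = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

    def __new__(cls, value: str) -> "SafeSql":
        if not cls._PATTERN.match(value):
            raise ValueError(f"refinement violated: {value!r}")
        return super().__new__(cls, value)

def order_by(column: SafeSql) -> str:
    # The refinement on `column` rules out injection into the query text.
    return f"SELECT * FROM users ORDER BY {column}"

print(order_by(SafeSql("age")))           # accepted
try:
    order_by(SafeSql("age; DROP TABLE"))  # rejected at construction
except ValueError as e:
    print(e)
```

A design-time version of the project would discharge the same check statically (as a refinement- or dependent-type obligation) rather than at construction time.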

Project 9: Run-time security monitoring of web application

Description: The goal of this project is to use ARMET to monitor web applications at run-time. The application needs to be modelled in ARMET, which will then monitor it at run-time to detect known and unknown threats.

Project 10: Secure-by-design web application

Description: The goal of this project is to develop a translator that translates a given web application into the Ur/Web language. The translation will identify those parts that are not protected by design and are thus security risks, which need to be handled either at design-time or at run-time. The Ur/Web language is proven to be secure by design against certain classes of attacks; for example, Ur/Web applications cannot:

  • Suffer from any kinds of code-injection attacks
  • Return invalid HTML
  • Contain dead intra-application links
  • Have mismatches between HTML forms and the fields expected by their handlers
  • Include client-side code that makes incorrect assumptions about the "AJAX"-style services that the remote web server provides
  • Attempt invalid SQL queries
  • Use improper marshaling or unmarshaling in communication with SQL databases or between browsers and web servers.

Project 11: Model based security risk assessment of machine learning applications

Description: The goal of this project is to develop a risk assessment technique that is capable of detecting known security risks to systems. Specifically, we plan to use Jeeves to analyze the APIs of a system implementation and perform a formal risk assessment to detect any undesired data flows that may threaten the security of the system. We target APIs written for various machine learning algorithms to evaluate our approach.


Project 12: Modelling classes of security threats

Description: The goal of this project is to develop a library of classes of security threats that is amenable to verification. The project involves investigating various classes of attacks based on their causes and effects. An adequate model of the classes will be constructed from their corresponding cause-effect relations. Then, a program will be annotated with the developed models. Based on the annotations, a tool will translate the annotated program into WhyML, which will semi-automatically prove or disprove that the given program is free from the modelled classes of threats. Furthermore, the project will result in:

  • A library of tactics required to automatically prove that a program is free from certain security threat classes, and
  • A library of formalized classes of threats that are amenable to verification.

The project aims to develop the library of threats based on the attacks described in the CAPEC repository, available at https://capec.mitre.org.


Project 13: Model based intrusion detection

Description: The goal of this project is to develop an intrusion detection technique that is capable of detecting known and hypothetical threats based on their models. First, a mapping needs to be established between threat models and their corresponding data representations. Based on this representation, the technique will automatically detect modelled threats without false alarms. Importantly, the models will include some hypothetical threats, which the technique will exploit at run-time to detect unknown threats. Furthermore, the project will result in:

  • A library of formalized known threats that is amenable to verification, and
  • A library of formalized hypothetical threats that is amenable to verification.
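
One simple way to picture the model-to-data mapping is as signature matching over an event stream, as in the Python sketch below. The model entries, event names, and the `detect` helper are all hypothetical placeholders; real models would be formalized, not tuples of strings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatModel:
    """Illustrative threat model: a cause (event sequence) and a flag."""
    name: str
    cause: tuple[str, ...]      # ordered event signature
    hypothetical: bool = False  # hypothetical threats cover unknowns

# Hypothetical library entries for illustration only.
MODELS = [
    ThreatModel("brute-force login", ("login_fail",) * 3),
    ThreatModel("probe-then-exfiltrate", ("port_scan", "large_upload"), True),
]

def detect(events: list[str]) -> list[str]:
    """Report every model whose cause occurs as a contiguous subsequence."""
    hits = []
    for m in MODELS:
        n = len(m.cause)
        if any(tuple(events[i:i + n]) == m.cause
               for i in range(len(events) - n + 1)):
            hits.append(m.name)
    return hits

trace = ["login_fail", "login_fail", "login_fail",
         "port_scan", "large_upload"]
print(detect(trace))  # -> ['brute-force login', 'probe-then-exfiltrate']
```

The project would replace this toy matcher with verified models, so that a reported detection carries a formal justification rather than a string comparison.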

Project 14: LLM-driven Governance-by-design

Description: The goal of this project is to evaluate LLMs to automatically check the compliance of software with various governance ordinances and directives by design. First, a program needs to be annotated with governance rules. Based on the annotations, a tool will translate the annotated program into WhyML, which will semi-automatically prove or disprove that the given program adheres to the annotated governance rules. Furthermore, the project will result in:

  • A library of tactics required to automatically prove compliance with governance laws, and
  • A library of formalized governance laws that are amenable to verification.

The project is motivated by online safety laws in the UK, such as the Online Safety Act, and will help to demonstrate compliance with such laws.


Project 15: LLM-driven Ethics-by-design

Description: The goal of this project is to evaluate LLMs to automatically check the compliance of software with ethical properties by design. First, a program needs to be annotated with ethical properties. Based on the annotations, a tool will translate the annotated program into WhyML (http://why3.lri.fr), which will semi-automatically prove or disprove that the given program adheres to the annotated ethical properties. Furthermore, the project will result in:

  • A library of tactics required to automatically prove ethical properties, and
  • A library of formalized ethical properties that are amenable to verification.