Keynote speakers

The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2024) is organised by the Lancaster University UCREL NLP research group.

Iva Gumnishka

Iva Gumnishka is an award-winning social entrepreneur in the field of Responsible AI. She is the founder of Humans in the Loop, a company which employs refugees to provide data labeling and human oversight services to the AI industry. In addition, she is a co-founder of the International Olympiad in AI for high school students, and a board member of For Humanity, an organization promoting the independent audit of AI systems. She has been named to the Forbes 30 Under 30 list and as a European Young Leader, and is the winner of the EU Prize for Women Innovators, the Cartier Women’s Initiative, and MIT SOLVE. She holds a degree in Human Rights from Columbia University.

Red Teaming: Trustworthy AI through diverse human testing

Current generative AI systems demonstrate considerable issues related to bias, diversity, inclusion, and stereotypes, stemming from the data used to train them. This affects the usability and trustworthiness of such systems, especially for users around the world who have to work with systems which underperform in their languages and which misrepresent them. One approach for uncovering such issues and tackling them prior to releasing the system is to use “diversity red teaming”: global teams of testers who perform different attacks on models in order to pinpoint their failure modes with regards to stereotypes and inclusivity. This talk will discuss the concept of red teaming as it relates to such diversity testing, and share considerations on best practices and limitations.

Sevil Şen 

Sevil Sen is a Professor in the Department of Computer Engineering at Hacettepe University, ranked among the top-5 universities in Turkey. She is the founder and head of the Wireless Networks and Intelligent Secure Systems (WISE) laboratory at Hacettepe University. Dr. Sen obtained her PhD from the University of York, UK, in 2011 and has since conducted several research visits to prestigious institutions, including the Eurecom Institute, France, and the University of York, UK. Her research is primarily focused on the intersection of artificial intelligence and cyber security, with a particular emphasis on developing intelligent secure systems. Her specific interests include malware analysis and intrusion detection. Dr. Sen actively employs artificial intelligence-based approaches to address cyber security challenges in mobile systems, wireless networks, and the IoT, striving to innovate solutions for enhancing cyber security in these domains. Throughout her career, Dr. Sen has conducted and consulted on numerous academic and industrial projects related to cyber security. In addition to her academic roles, she serves as an area editor for esteemed journals, including Ad Hoc Networks, Genetic Programming and Evolvable Machines, and Data in Brief.

AI versus AI: The Relentless Cyber Security Arms Race

In the ever-evolving realm of cyber security, attackers and defenders are locked in a relentless arms race. Attackers continuously seek new vulnerabilities, devise innovative attack methods, and find ways to bypass security solutions. In response, defenders continuously update their systems, enhance their attack databases with new signatures, and retrain their AI-based systems. This cyclical battle is perpetual, and the integration of AI only intensifies it.

Today, AI empowers defenders, with numerous security solutions leveraging AI to stay ahead of threats. However, the very near future promises a significant increase in the use of AI by attackers, enabling them to automate and scale their malicious activities. In this talk, I will provide an overview of the current use of AI by both attackers and defenders, focusing on malware analysis and detection. I will discuss the current challenges, issues, and potential directions in this ongoing AI-driven cyber security battle.

Paolo Rosso

Paolo Rosso (http://personales.upv.es/prosso/) is Full Professor of Computer Science at the Universitat Politècnica de València, Spain. His current research interests fall mainly in the area of detection of harmful information in social media: disinformation and hate speech, also when conveyed implicitly via stereotypes, irony, sarcasm or humour. He is the principal investigator of the following projects funded by the Spanish Ministry of Science, Innovation and Universities, and by European Union NextGenerationEU/PRTR: XAI-DisInfodemics on eXplainable AI for disinformation and conspiracy detection during infodemics; FAKEnHATE-PdC on FAKE news and HATE speech; and FairTransNLP on Fairness and Transparency for equitable NLP applications in social media. He is also member of the CHIST-ERA MARTINI project on Malicious Actors pRofiling and deTection In online social Network through artificial Intelligence. He collaborated with the Spanish National Security Department and with the Science and Technology Office (Oficina-C) of the Spanish Congress of Deputies in topics related to disinformation campaigns and AI. He is one of the organisers of the PAN shared task on Oppositional thinking analysis: Conspiracy theories vs critical thinking narratives (https://pan.webis.de/clef24/pan24-web/oppositional-thinking-analysis.html).

Beyond fake news in disinformation detection: analysis of narratives of conspiracy theories

The rise of social media has offered a fast and easy way to propagate disinformation and conspiracy theories. Despite the research attention it has received, disinformation detection remains an open problem, and users keep sharing texts that contain false statements. Our countries are exposed to numerous attempts at foreign information manipulation and interference. The European External Action Service (EEAS) and the European Digital Media Observatory (EDMO) are giving special importance to fighting and analysing the disinformation narratives of this information warfare. Conspiracy theories are complex narratives that attempt to explain the ultimate causes of significant events as covert plots orchestrated by secret, powerful, and malicious groups. Their automatic detection in written texts has recently gained popularity, although the problem is commonly framed as a binary classification task, which can be detailed more thoroughly with fine-grained approaches corresponding to multi-label or multi-class classification. However, existing approaches do not distinguish between critical and conspiratorial thinking. This distinction has important implications for automatic content moderation: if models do not differentiate between critical and conspiratorial thinking, there is a high risk of pushing people toward conspiracy communities. After covering the above aspects, I will conclude my keynote by describing the challenge that we organised at PAN 2024 on discriminating between conspiracy narratives and critical thinking.

Jacques Klein

Dr. Jacques Klein is a full professor in software engineering and software security within the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg. Prof. Klein co-leads a team named TruX of about 25 researchers developing innovative approaches and tools for helping the research and practice communities build trustworthy software. Prof. Klein received a Ph.D. degree in Computer Science from the University of Rennes, France, in 2006. His main areas of expertise are threefold: (1) Software Security, (2) Software Reliability, and (3) Data Analytics. Prof. Klein has published about 200 papers, often in top venues such as ICSE, FSE, ASE, ISSTA, PLDI, AAAI, etc. In addition to academic achievements, Prof. Klein also has a long-standing experience and expertise in successfully running industrial projects with several industrial partners in various domains by applying data analytics, software engineering, information retrieval, etc., to their research problems.

AI for Software Vulnerabilities and Android Malware Detection

Users can today download a wide variety of apps or programs, ranging from simple toy games to sophisticated business-critical software. They rely on these software programs daily to perform diverse tasks, some of them related to sensitive information such as their finances or health. Ensuring high-quality, reliable, and secure software is thus key. In the TruX research group of the Interdisciplinary Centre for Security, Reliability and Trust (SnT) of the University of Luxembourg, we are working to deliver practical techniques, tools, and other artifacts (such as repositories) that make the analysis of software possible. In this talk, I will explain how we leverage AI techniques to address two concrete problems: the detection of malicious apps in the Android ecosystem and the detection of software vulnerabilities. In particular, I will highlight our key contributions to code representation, which make it possible to embed Android apps and code snippets efficiently.