We are currently finalizing speakers for SEMLA 2026. We have already lined up speakers from leading universities, research labs, and industry teams.
Stay tuned for more.

Peter Rigby is a Full Professor of Software Engineering at Concordia University in Montréal and a Software Engineering Researcher at Meta. His research studies how software developers collaborate to build successful systems, with a focus on code review, coordination, empirical software engineering, and the social structure of software teams.
His work is driven by evidence from real development settings and aims to identify practices that make software engineering more reliable, scalable, and effective. For SEMLA 2026, Peter brings deep expertise in the human and organizational side of software engineering, especially as agentic systems change how teams review code, coordinate work, and decide which parts of engineering should remain under human judgment.
TBD
Zhou Yang is an Assistant Professor in the Department of Computing Science at the University of Alberta and an Amii Fellow. His research focuses on trustworthy AI, automated software engineering, software engineering for AI, human-AI interaction, reinforcement learning, and large language models for code.
He completed his PhD at Singapore Management University, earned an MSc in Software Systems Engineering from University College London, and previously worked as a senior research engineer at the Centre for Research on Intelligent Software Engineering. His work examines how code models and AI-assisted development tools can be made more secure, private, efficient, and usable.
TBD
Suhaib Mujahid is a Staff Machine Learning Engineer at Mozilla. His background combines software engineering research with production machine learning systems. Before joining Mozilla, he completed graduate research at Concordia University, where he worked on mining software repositories, release engineering, machine learning on code, defect prediction, software ecosystems, and software architecture.
At Mozilla, his work connects machine learning with open-source software engineering problems, including LLM-assisted QA test planning, performance measurement datasets, and LLM-based review support. His SEMLA perspective is grounded in real open-source infrastructure, large codebases, and production engineering constraints.
TBD
Qiaolin (Isabelle) Qin is a third-year PhD student in the Department of Software Engineering at Polytechnique Montréal. Her research focuses on improving the trustworthiness and security of AI-based software.
Isabelle serves as a reviewer and organizing committee member for top journals and conferences. Her research interests include software monitoring, AI model explainability, and reverse engineering. At SEMLA 2026, she brings insights into the novel security issues introduced by the integration of AI models, and into how these challenges can be addressed by enhancing software supply chain transparency.
TBD
Tushar Sharma is an Assistant Professor in the Faculty of Computer Science at Dalhousie University. His research focuses on software design and architecture, refactoring, code quality, technical debt, software maintenance, mining software repositories, and machine learning for software engineering.
Before academia, Tushar worked at Siemens Research in the United States and Siemens Corporate Technology in India. He has written extensively on software design smells, refactoring, and maintainability. For SEMLA 2026, his expertise centers on sustainable AI, ML for software engineering, maintainability, and the engineering discipline needed to build reliable AI-driven systems.
Artificial Intelligence (AI) models are energy-hungry. As these models grow larger and become more complex, their energy consumption increases substantially. This leads to higher carbon emissions and rising operational costs, creating serious challenges for sustainable computing. This talk highlights the environmental impact of modern AI systems, introduces key metrics for measuring energy use and carbon footprint, and discusses available tools and frameworks for measuring and monitoring energy consumption. Furthermore, the session summarizes current research and practical techniques for developing greener AI systems, including model pruning, quantization, efficient architecture design, and workload scheduling, and situates these within the wider movement toward responsible and sustainable AI development. The talk concludes by highlighting important open research challenges and calling on the community to treat energy efficiency not as an afterthought but as a first-class objective in the design and evaluation of AI systems.
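The key metric the abstract mentions, a model's operational carbon footprint, is commonly estimated as the energy a training or inference run draws (in kWh) multiplied by the carbon intensity of the local electricity grid (in gCO2e/kWh). The sketch below illustrates that arithmetic; the function names and the example numbers (300 W average draw, 400 gCO2e/kWh grid) are assumptions for illustration, not figures from the talk.

```python
def training_energy_kwh(avg_power_watts: float, hours: float) -> float:
    """Energy drawn by the hardware over a run, in kilowatt-hours."""
    return avg_power_watts * hours / 1000.0


def carbon_footprint_gco2e(energy_kwh: float,
                           grid_intensity_gco2e_per_kwh: float) -> float:
    """Operational emissions in grams of CO2-equivalent: energy x grid intensity."""
    return energy_kwh * grid_intensity_gco2e_per_kwh


# Hypothetical example: one GPU averaging 300 W for a 24-hour training run,
# on a grid emitting 400 gCO2e per kWh.
energy = training_energy_kwh(300, 24)            # 7.2 kWh
emissions = carbon_footprint_gco2e(energy, 400)  # 2880 gCO2e (about 2.9 kg CO2e)
print(f"{energy:.1f} kWh, {emissions:.0f} gCO2e")
```

In practice, tools of the kind the talk surveys measure `avg_power_watts` directly from hardware counters and look up grid intensity by region, but the underlying accounting is this simple product.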
Nikita Dvornik is an AI researcher affiliated with Borealis AI. His work is in machine learning and computer vision, with emphasis on representation learning, visual understanding, and models that can generalize from limited supervision.
His background in perception and learning systems adds a strong perspective on robustness, generalization, and model behavior under real-world constraints. For SEMLA 2026, his work connects AI research to high-stakes production settings where models need to perform reliably beyond controlled benchmarks.
TBD
Lovedeep Gondara is Head of AI Research and Development at Vanguard and an Adjunct Professor at the University of British Columbia. He is a machine learning researcher whose work spans large and small language models, agentic methods, model grounding, privacy-preserving machine learning, differential privacy, deep learning, decentralized learning, statistics, and multimodal machine learning.
He earned his PhD in Computer Science from Simon Fraser University and has worked on applying machine learning in sensitive domains where reliability, privacy, and governance matter. At SEMLA 2026, Lovedeep brings a regulated-industry perspective on trustworthy AI, responsible deployment, and language-model systems that need to operate under real organizational constraints.
TBD

Vincent Fortier leads Field Engineering for Databricks’ Public Sector business in Canada. His team helps governments, hospitals, and universities move AI and ML workloads from pilot to production, with the constraints of citizen data, PHI, and public accountability baked in.
With 20+ years on the technical side of enterprise data and machine learning, his focus is on what it actually takes to ship ML systems that hold up in the real world: trust, scale, compliance, and the engineering discipline that gets you there.
TBD
Patrice Béchard is an Applied Research Scientist at ServiceNow in Montréal, where he works on AI agents for enterprise automation. His research studies how large language models can reliably interact with complex enterprise systems to execute real-world workflows.
His recent work includes workflow generation, business-process automation, web agents, agent debugging, retrieval-augmented generation, and reducing hallucinations in structured outputs. Patrice is a direct fit for SEMLA 2026: his work sits at the center of enterprise agents, covering how they act, how they are evaluated, and how teams can make their outputs reliable enough for production use.
TBD
Gustavo Pinto is an AI Engineer at Zup Innovation and a software engineering researcher with experience across research and industry. His work spans open-source software, human aspects of software engineering, empirical software engineering, mining software repositories, AI agents, foundation models, and machine learning for software engineering.
He has published widely in software engineering venues and works on problems that connect developer productivity with intelligent software tools. At SEMLA 2026, Gustavo brings a practical view of how software teams adopt AI systems, how developer workflows change, and what evidence is needed before organizations trust these tools.
TBD
Orlando E. Marquez is a Lead Applied Research Scientist at ServiceNow with a strong background in software engineering. His research focuses on natural language processing, text-to-text systems, explainability, error analysis, semi-supervised learning, summarization, structured outputs, and text-to-workflow generation.
Orlando has worked on shipping NLP and GenAI systems to enterprise users, including low-code workflow generation and structured outputs from natural-language requirements. His contribution to SEMLA centers on turning AI research into deployed enterprise systems, with careful evaluation, engineering constraints, and user-facing reliability in mind.
TBD