Researchers at the Rosalind Franklin Institute are using generative AI, including GANs, to augment limited biological datasets, specifically mirtron data from mirtronDB. The synthetic data mimics real samples, enabling more comprehensive training of machine learning models and leading to improved mirtron identification tools. They also plan to apply Large Language Models (LLMs) to predict previously unknown patterns in sequence- and structure-based biology problems. Why it matters: This research explores AI techniques to tackle data scarcity in biological research, potentially accelerating discoveries in noncoding RNA and transposable elements.
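Before a GAN can generate synthetic sequences, real mirtron sequences must be encoded numerically. The sketch below shows one common choice, one-hot encoding with fixed-length padding; the function name and padding scheme are illustrative assumptions, not details of the Rosalind Franklin Institute pipeline.

```python
# Illustrative sketch: one-hot encoding RNA sequences as GAN input.
# The fixed-length padding scheme is an assumption for batching,
# not the institute's actual preprocessing.

ALPHABET = "ACGU"

def one_hot(seq, length):
    """Encode an RNA sequence as a length x 4 one-hot matrix,
    truncating or zero-padding to a fixed length."""
    seq = seq.upper()[:length]
    matrix = [[0.0] * len(ALPHABET) for _ in range(length)]
    for i, base in enumerate(seq):
        if base in ALPHABET:
            matrix[i][ALPHABET.index(base)] = 1.0
    return matrix

encoded = one_hot("ACGUAC", length=8)
```

A generator network would then output matrices of the same shape, which a discriminator scores against encodings of real mirtrons.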
MBZUAI researchers collaborated with Carnegie Mellon University and the Broad Institute of MIT and Harvard to develop a new statistical method for analyzing data used for gene regulatory network inference. The method addresses the challenge of distinguishing true zero expression values from dropouts in single-cell RNA sequencing data. This research will be presented at the Twelfth International Conference on Learning Representations (ICLR 2024). Why it matters: Improving gene regulatory network inference can lead to better understanding of disease mechanisms and inform the development of new medicines.
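The dropout problem can be illustrated with a toy score: if a gene's counts contain far more zeros than a Poisson with the same mean would predict, some of those zeros may be technical dropouts rather than true absence of expression. This is only a minimal illustration of the phenomenon, not the statistical method developed by the MBZUAI, CMU, and Broad Institute team.

```python
import math

def excess_zero_score(counts):
    """Gap between the observed zero fraction of a gene's counts and
    the zero fraction expected under a Poisson with the same mean.
    A large positive gap hints at dropouts rather than true zeros."""
    n = len(counts)
    mean = sum(counts) / n
    observed_zero_frac = sum(1 for c in counts if c == 0) / n
    expected_zero_frac = math.exp(-mean)  # P(X = 0) for Poisson(mean)
    return observed_zero_frac - expected_zero_frac

# Gene with many zeros despite a high mean among nonzero cells:
# zero inflation suggests dropouts.
gene = [0, 0, 0, 0, 5, 6, 4, 7, 0, 5]
score = excess_zero_score(gene)
```

Real methods model this formally, for example with zero-inflated count distributions, rather than thresholding a single gap statistic.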
MBZUAI faculty Kun Zhang is researching methods to improve the reliability of generative AI, particularly in healthcare applications. Current generative AI models often act as "black boxes," making it difficult to understand why a specific result was produced. Zhang's research focuses on incorporating causal relationships into AI systems so that their outputs are more accurate and meaningful. Why it matters: Improving the trustworthiness of generative AI is crucial for sensitive sectors like healthcare and for ensuring responsible AI deployment across the region.
A KAUST alumnus presented research on using large language models for complex disease modeling and drug discovery. LLMs were trained on insurance claims from 123 million people in the US to model diseases and predict genetic parameters. Protein language models were developed to discover remote homologs and functional biomolecules, while RNA language models were used for RNA structure prediction and reverse design. Why it matters: This work highlights the potential of LLMs to accelerate computational biology research and drug development, with a KAUST connection.
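Remote homolog discovery with protein language models typically compares learned embedding vectors rather than raw sequences, since distant homologs can share little sequence identity. The sketch below computes cosine similarity between per-protein embeddings; the three-dimensional vectors are hypothetical stand-ins (real embeddings from a trained model have hundreds of dimensions), and this is a generic illustration, not the presenter's method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical per-protein embeddings for illustration only.
query   = [0.9, 0.1, 0.4]
homolog = [0.8, 0.2, 0.5]   # assumed remote homolog: nearby in embedding space
random_ = [-0.3, 0.9, -0.6]  # assumed unrelated protein

sim_hom = cosine(query, homolog)
sim_rand = cosine(query, random_)
```

In practice, candidate homologs are ranked by such similarity scores against a large database of embedded sequences.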
Daisuke Kihara from Purdue University presented a seminar at MBZUAI on using deep learning for biomolecular structure modeling. His lab is developing 3D structure modeling methods, especially for cryo-electron microscopy (cryo-EM) data. They are also working on RNA structure prediction and peptide docking using deep neural networks inspired by AlphaFold2. Why it matters: Applying advanced deep learning techniques to biomolecular structure prediction can accelerate drug discovery and our understanding of molecular functions.