The Technology Innovation Institute's (TII) Directed Energy Research Center (DERC) is integrating machine learning (ML) techniques into signal processing to accelerate research. One project used convolutional neural networks to predict COVID-19 pneumonia from chest X-rays with 97.5% accuracy. DERC researchers also demonstrated that ML-based signal and image processing can retrieve up to 68% of text information from electromagnetic emanations. Why it matters: This adoption of ML for signal processing at TII highlights the potential for advanced AI techniques to enhance research and security applications in the UAE.
Francesco Orabona of Boston University, who holds a PhD from the University of Genova, researches online learning, optimization, and statistical learning theory. He previously worked at Yahoo Labs and the Toyota Technological Institute at Chicago, and recently took part in a panel discussion hosted by MBZUAI. Why it matters: Optimization algorithms are crucial for advancing machine learning and AI, and researchers like Orabona contribute to this field.
MBZUAI researchers will present 20 papers at the 40th International Conference on Machine Learning (ICML) in Honolulu. Visiting Associate Professor Tongliang Liu leads with seven publications, followed by Kun Zhang with six. One paper investigates semi-supervised learning vs. model-based methods for noisy data annotation in deep neural networks. Why it matters: The research addresses the critical issue of data quality and accessibility in machine learning, particularly for organizations with limited resources for data annotation.
MBZUAI Associate Professor Martin Takáč is working on high-performance computing and machine learning with applications in logistics, supply chain management, and other areas. His research focuses on using AI to improve precision and efficiency in tasks like predicting demand and optimizing delivery routes. Takáč's interests include imitation learning, predictive modeling, and reinforcement learning to enable AI to mimic human behavior and predict future outcomes. Why it matters: This research contributes to the development of more efficient and reliable AI systems that can be applied to a wide range of industries in the UAE and beyond.
Qirong Ho, co-founder and CTO of Petuum Inc., will be contributing to the "ML Systems for Many" initiative. Petuum is recognized for creating standardized building blocks for AI assembly. Ho holds a Ph.D. from Carnegie Mellon University and is part of the CASL open-source consortium. Why it matters: Showcases the ongoing efforts to democratize AI development and deployment, making it more accessible and sustainable.
MBZUAI Professor Fakhri Karray and co-authors from the University of Waterloo have published "Elements of Dimensionality Reduction and Manifold Learning," a textbook on methods for extracting useful components from large datasets. The book addresses the "curse of dimensionality," the challenge whereby datasets with a growing number of features become harder to analyze and learn from. Karray developed the material from a popular course he taught at Waterloo. Why it matters: The textbook provides a unified resource for students and researchers in machine learning and AI, addressing a foundational challenge in processing high-dimensional data, relevant to diverse applications in the region.
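To make the idea concrete, here is a minimal sketch of one classical dimensionality-reduction method, principal component analysis (PCA), which projects high-dimensional data onto the few directions of greatest variance. This is an illustrative example of the general technique, not material from the textbook itself; the function name `pca_reduce` is our own.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components."""
    # Center the data so each feature has zero mean.
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data matrix; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Keep only the k directions of greatest variance.
    return X_centered @ Vt[:k].T

# Example: compress 5-dimensional points down to 2 dimensions.
X = np.random.default_rng(0).normal(size=(100, 5))
Z = pca_reduce(X, 2)
print(Z.shape)  # (100, 2)
```

The reduced representation `Z` keeps as much of the data's variance as any 2-dimensional linear projection can, which is the sense in which it extracts the "useful components" of a large dataset.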
Agathe Guilloux, a professor in Data Science at Evry Paris Saclay University, presented on machine learning algorithms for precision medicine at MBZUAI. Her talk covered the main challenges of precision medicine and how AI can address them. She also discussed algorithms developed for decision support tools. Why it matters: This highlights MBZUAI's role as a platform for discussing advanced AI applications in healthcare, even when the research is not directly conducted in the GCC.
MBZUAI and KAUST researchers collaborated to present new optimization methods at ICML 2024 for composite and distributed machine learning settings. The study addresses challenges in training large models due to data size and computational power. Their work focuses on minimizing the "loss function" by adjusting internal trainable parameters, using techniques like gradient clipping. Why it matters: This research contributes to the ongoing advancement of machine learning optimization, crucial for improving the performance and efficiency of AI models in the region and globally.
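As an illustration of the general mechanism described above (not the authors' specific algorithm), the following sketch shows gradient descent on a least-squares loss with the gradient clipped to a maximum norm before each update; the names `clip_gradient` and `train_step` are our own.

```python
import numpy as np

def clip_gradient(grad, max_norm):
    """Rescale the gradient so its L2 norm never exceeds max_norm."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

def train_step(w, X, y, lr=0.1, max_norm=1.0):
    """One gradient-descent step on a mean-squared-error loss,
    clipping the gradient before updating the parameters w."""
    grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
    grad = clip_gradient(grad, max_norm)
    return w - lr * grad

# Toy usage: recover the weights of a small linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = train_step(w, X, y)
print(np.round(w, 2))
```

Clipping bounds the size of any single update, which keeps training stable when occasional large gradients would otherwise throw the parameters off course, a concern that grows with model and data scale.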