Xiaolin Huang from Shanghai Jiao Tong University presented a talk at MBZUAI on training deep neural networks in tiny subspaces. The talk covered the low-dimension hypothesis in neural networks and methods for finding subspaces that support efficient training. It argued that training in low-dimensional subspaces can improve training efficiency, generalization, and robustness. Why it matters: Efficient training methods are crucial for resource-constrained environments and can broaden access to advanced AI.
Hassan Sajjad from Dalhousie University presented research on exploring the latent space of AI models to assess their safety and trustworthiness. He discussed use cases where analyzing latent space helps explain the robustness-generalization tradeoff in adversarial training and evaluate language comprehension. Sajjad's work aims to build better AI models and increase trust in their capabilities by examining model internals. Why it matters: Intrinsic evaluation of model internals is becoming important for improving AI safety and robustness.
The journal Communications Physics has a focus collection on space quantum communications. The collection covers supporting technologies, new quantum protocols, inter-satellite QKD, satellite constellations, and quantum-inspired technologies and protocols for space-based communication. Contributions are welcome from October 20, 2020 to April 30, 2021, and accepted papers are published on a rolling basis. Why it matters: Space-based quantum communication is a critical area for developing secure, global quantum networks, and this collection could highlight relevant research for the GCC region as it invests in advanced technologies.