Jan 9

Unveiling the Power of Multimodal AI

The release of ChatGPT last year marked a turning point in the history of artificial intelligence, and the field has since moved from dynamic open-source releases toward the development of multimodal models. Multimodal AI breaks with traditional single-mode data processing by incorporating multiple input types, such as sound, text, and images. This is a major step toward AI that handles diverse sensory information the way humans do. Mark Chen, head of frontiers research at OpenAI, emphasized during a November 2023 presentation at EmTech MIT that "the interfaces we encounter in the world are multimodal." He highlighted the aspiration for AI models to observe and interpret visuals and sounds as humans do, while also generating content that engages multiple senses.

OpenAI's GPT-4 model already has multimodal capabilities, allowing it to interpret and respond to both visual and audio inputs. Chen illustrated this with a scenario in which a user photographs the contents of a refrigerator and prompts ChatGPT to recommend a recipe based on the ingredients in the images; the interaction could even include an audio component if the request is spoken through ChatGPT's voice mode (a rough code sketch of such a request appears below). While many AI initiatives today focus predominantly on text-based models, Matt Barrington, Americas emerging technologies leader at EY, emphasized the immense potential of merging text, conversation, images, and video, noting that the real impact lies in integrating these modalities and applying them across industries.

Multimodal AI's practical applications span diverse fields. In healthcare, for instance, such models can analyze medical images while taking patient history and genetic information into account, improving diagnostic accuracy. At a functional level within organizations, multimodal AI can give employees basic design and coding capabilities even if they lack formal expertise in those domains. Barrington put it this way: "I've never been skilled at drawing, but now, through AI capabilities like image generation, I can visualize ideas that were previously beyond my artistic ability."

Incorporating multimodal capabilities could also strengthen AI models by exposing them to new datasets. Chen highlighted the importance of feeding models raw inputs from the world, such as video or audio, so that they can perceive and draw inferences on their own, supplementing the limitations of purely language-based learning.
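For readers who want to try the refrigerator scenario Chen described, here is a minimal sketch of how such a request could look using OpenAI's Python SDK. The model name, image path, and prompt are illustrative assumptions, not details from the presentation.

```python
# Minimal sketch: send a photo of a fridge's contents to a vision-capable
# GPT-4 model and ask for a recipe suggestion. Requires OPENAI_API_KEY in
# the environment; the model name and file path are illustrative assumptions.
import base64

from openai import OpenAI

client = OpenAI()


def encode_image(path: str) -> str:
    """Read a local image and return it as a base64 string for the API payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


fridge_photo = encode_image("fridge.jpg")  # hypothetical local photo

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable GPT-4 variant
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "These are the ingredients in my fridge. "
                            "Suggest a recipe I could make tonight.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{fridge_photo}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Embedding the photo as a base64 data URL keeps the example self-contained; a publicly hosted image URL would work as well, and the voice-mode variant Chen mentioned would simply replace the typed prompt with transcribed speech.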
