(intro-cm)=

# What is Cross-Modal & Multi-Modal?

Jina is the framework for helping you build cross-modal and multi-modal applications on the cloud. But first, what is cross-modal and multi-modal? And what are the applications? This chapter will answer these preliminary questions.

A video version of this chapter is available below.

<iframe width="560" height="315" src="https://www.youtube.com/embed/vxUG0ZVMOp0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Beyond single modality

The term "modal" is shorthand for "data modality", which can be thought of as the "type" of data. For example, a tweet belongs to the text modality; a photo belongs to the image modality; a video belongs to the video modality; and so on.

In the early days of AI, research focused on a single modality, such as vision or language. For example, a spam filter focuses on the text modality, a photo classifier on the image modality, and a music recommender on the audio modality.

However, it soon became clear that in order to create truly intelligent systems, AI must be able to integrate multiple modalities. In the real world, data is often multi-modal, meaning that it consists of multiple modalities. For example, a tweet often contains not only text, but also images, videos, and links. A video often contains not only video frames, but also audio and text (e.g. subtitles). This has led to the development of cross-modality and multi-modality in AI.

Multi-modal machine learning is a relatively new field that is concerned with the development of algorithms that can learn from multiple modalities of data.

Cross-modal machine learning is a subfield of multi-modal machine learning that is concerned with the development of algorithms that can learn from multiple modalities of data that are not necessarily aligned, for example, learning from images and text that are not necessarily about the same thing.

Thanks to recent advances in deep neural networks, cross-modal or multi-modal technologies enable advanced intelligence on all kinds of unstructured data, such as images, audio, video, PDF, 3D meshes, and more.

Cross-modality and multi-modality are two terms that are often used interchangeably, but there is a big difference between the two. Multi-modality refers to the ability of a system to use multiple modalities, or input channels, to achieve a desired goal. For example, a human can use both sight and hearing to identify a person or object. In contrast, cross-modality refers to the ability of a system to use information from one modality to improve performance in another modality. For example, if you see a picture of a dog, you might be able to identify it by its bark when you hear it.

AI systems that are designed to work with multiple modalities are said to be "multi-modal." However, the term "cross-modality" is more accurate when referring to AI systems that use information from one modality to improve performance in another.

In general, cross-modal and multi-modal technologies allow for a more holistic understanding of data, as well as increased accuracy and efficiency.

## Applications

There are many potential applications of cross-modal and multi-modal machine learning. For example, a cross-modal algorithm could automatically generate descriptions of images (e.g. for blind people); a search system could retrieve images from text queries (e.g. "find me a picture of a dog"); and a text-to-image generation system could generate images from text descriptions (e.g. "generate an image of a dog").

Cross-modal AI systems have the potential to greatly improve the performance of AI systems by making them more flexible and robust. For example, a cross-modal system could be used to improve the accuracy of facial recognition algorithms by using information from other modalities such as body language or voice. Another potential application is using information from one modality to compensate for the limitations of another. For example, if an image recognition algorithm is having difficulty identifying an object due to poor lighting conditions, information from another modality such as sound could be used to help identify the object.

Under this big umbrella sit two families of applications: neural search and creative AI.

### Neural Search

One of the most promising applications of cross-modal machine learning is neural search. The core idea of neural search is to leverage state-of-the-art deep neural networks to build every component of a search system. In short, neural search is deep neural network-powered information retrieval. In academia, it’s often called neural IR.
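
Stripped to its core, neural search has two stages: encode data into vectors with a neural network, then retrieve by vector similarity. The minimal sketch below fakes the encoding stage with random vectors (a real system would use a trained model); only the retrieval arithmetic is the point:

```python
import numpy as np

# stand-ins for real embeddings produced by a neural network
doc_embeddings = np.random.rand(1000, 512)  # 1,000 indexed items, 512-dim each
query_embedding = np.random.rand(512)       # one encoded query

# cosine similarity between the query and every indexed item
sims = doc_embeddings @ query_embedding / (
    np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(query_embedding)
)

top9 = np.argsort(-sims)[:9]  # indices of the 9 most similar items
print(top9, sims[top9])
```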

Below is an example of an image embedding space generated by DocArray (the data structure behind Jina) and used for content-based image retrieval. Notice how similar images are mapped close together in the embedding space.

Searching is as simple as:

```python
db = ...  # a DocumentArray of indexed images
queries = ...  # a DocumentArray of query images

queries.match(db, limit=9)  # attach the top-9 nearest images to each query

for d in queries:
    for m in d.matches:
        print(d.uri, m.uri, m.scores['cosine'].value)
```

```text
left/02262.jpg right/03459.jpg 0.21102
left/02262.jpg right/02964.jpg 0.13871843
left/02262.jpg right/02103.jpg 0.18265384
left/02262.jpg right/04520.jpg 0.16477376
...
```

Neural search is particularly well suited to cross-modal search tasks, because it can learn to map the features of one modality (e.g. text) to the features of another modality (e.g. images). This enables neural search engines to search for documents and images by text queries, and to search for text documents by image queries.
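
To make this concrete, here is a hedged sketch of text-to-image matching using OpenAI's CLIP model via the Hugging Face `transformers` library. CLIP is not part of this chapter's stack; it is used here only because it is a well-known model that embeds text and images into a shared space. The image paths are hypothetical:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
processor = CLIPProcessor.from_pretrained('openai/clip-vit-base-patch32')

# hypothetical image files; substitute any images you have at hand
images = [Image.open(p) for p in ('dog.jpg', 'cat.jpg', 'car.jpg')]

inputs = processor(text=['find me a picture of a dog'],
                   images=images, return_tensors='pt', padding=True)

with torch.no_grad():
    out = model(**inputs)

# similarity scores between the text query and each candidate image;
# the best match is the image with the highest score
print(out.logits_per_text.softmax(dim=-1))
```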

#### Think outside the (search) box

Many neural search-powered applications do not have a search box:

- A question-answering chatbot can be powered by neural search: by first indexing all hard-coded QA pairs and then semantically mapping user dialog to those pairs (see the sketch after this list).

- A smart speaker can be powered by neural search: by applying STT (speech-to-text) and semantically mapping text to internal commands.

- A recommendation system can be powered by neural search: by embedding user-item information into vectors and finding top-K nearest neighbours of a user/item.
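
As an illustration of the first bullet, here is a toy sketch of semantic QA matching with DocArray. The `embed` function is a deliberately fake stand-in for a real sentence encoder, so the actual match is arbitrary; only the indexing-and-matching flow is the point:

```python
import zlib
import numpy as np
from docarray import Document, DocumentArray

def embed(text: str) -> np.ndarray:
    """Fake, deterministic 'encoder'; swap in a real sentence model in practice."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    return rng.random(64).astype('float32')

qa_pairs = {
    'How do I reset my password?': 'Go to Settings > Account > Reset.',
    'How do I cancel an order?': 'Open the order page and click Cancel.',
}

# index: one Document per hard-coded question, with its answer kept in tags
index = DocumentArray(
    Document(text=q, embedding=embed(q), tags={'answer': a})
    for q, a in qa_pairs.items()
)

# map a user utterance to its nearest indexed question
user = DocumentArray([Document(text='I forgot my password',
                               embedding=embed('I forgot my password'))])
user.match(index, limit=1)
print(user[0].matches[0].tags['answer'])
```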

Neural search creates a new way to comprehend the world. It opens new doors that lead to new businesses.

### Creative AI

Another potential application of cross-modal machine learning is creative AI. Creative AI systems use artificial intelligence to generate new content, such as images, videos, or text. For example, OpenAI's GPT-3 is a large language model that can generate text. The model is trained on a large corpus of text, such as books, articles, and websites. Once trained, it can generate new text that is similar to the training data. This can be used to write new articles, stories, or even poems.

OpenAI's DALL·E is another example of a creative AI system. This system generates images from textual descriptions. For example, given the text "a black cat with green eyes", the system will generate an image of a black cat with green eyes. Below is an example of generating images from a text prompt using DALL·E Flow (a text-to-image system built on top of Jina).

```python
from docarray import Document

server_url = 'grpc://dalle-flow.jina.ai:51005'
prompt = 'an oil painting of a humanoid robot playing chess in the style of Matisse'

# send the prompt to the DALL·E Flow server; the generated images
# come back as matches of the returned Document
doc = Document(text=prompt).post(server_url, parameters={'num_images': 8})
da = doc.matches

# render the candidate images in a grid
da.plot_image_sprites(fig_size=(10, 10), show_index=True)
```

Creative AI holds great potential for the future. It has the potential to revolutionize how we interact with machines, helping us create more personalized experiences, e.g.:

- Create realistic 3D images and videos of people and objects, which can be used in movies, video games, and other visual media.
- Generate realistic and natural-sounding dialogue, which can be used in movies, video games, and other forms of entertainment.
- Create new and innovative designs for products, which can be used in manufacturing and other industries.
- Create new and innovative marketing campaigns, which can be used in advertising and other industries.

## Relationship is the key

So what ties neural search and creative AI together?

The "relationship" between or within modalities.

What is this "relationship" we are talking about? Consider the following illustration, where the texts "cat", "dog", "human", "ape" and their corresponding images are all represented in one embedding space:

*(figure: text and image embeddings of "cat", "dog", "human", and "ape" mapped into one shared embedding space)*

The "relationship" encodes the following information (a sketch after this list probes a few of these relations with a real model):

- The text embedding of "cat" is closer to the text embedding of "dog" (same modality);
- The text embedding of "human" is closer to the text embedding of "ape" (same modality);
- The text embedding of "cat" is farther from the text embedding of "human" (same modality);
- The text embedding of "cat" is closer to the image embedding of "cat" (different modality);
- The image embedding of "cat" is closer to the image embedding of "dog" (same modality);
- etc.
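
The same-modality relations above are easy to probe with any real text encoder. As a hedged sketch, again borrowing CLIP's text tower from `transformers` (an assumption for illustration, not part of the original figure), one can check that "cat" lands closer to "dog" than to "human":

```python
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')
tokenizer = CLIPTokenizer.from_pretrained('openai/clip-vit-base-patch32')

words = ['cat', 'dog', 'human', 'ape']
tokens = tokenizer(words, return_tensors='pt', padding=True)

with torch.no_grad():
    emb = model.get_text_features(**tokens)

emb = emb / emb.norm(dim=-1, keepdim=True)  # unit-normalize each embedding
sims = emb @ emb.T                          # pairwise cosine similarities

# expect sims[cat, dog] > sims[cat, human] if the space captures the relations above
for i, w in enumerate(words):
    print(w, {v: round(float(sims[i, j]), 3) for j, v in enumerate(words)})
```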

Don't underestimate the power of this relationship. It is the foundation of neural search and creative AI. It is like the DNA of a species: once mastered, it can be used to find the closest match to an existing species, or to create entirely new species!


In summary, the key to cross-modal and multi-modal applications is understanding the relationship between modalities. With this relationship, one can find existing data, which is neural search; or make new data, which is creative AI.

In the {ref}`next chapter <what-is-jina>`, we will see how Jina is the ideal tool for building cross-modal and multi-modal applications on the cloud.