Refer and ground anything anywhere at any granularity
A multimodal large language model (MLLM) from Apple that excels at both image understanding and language processing, with particular strength in understanding spatial references: it can take a referred region of an image as input and ground its answers back to locations in the image.
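As a rough illustration of the refer-and-ground interaction style, here is a minimal sketch; the prompt template, coordinate convention, and helper function are hypothetical stand-ins for illustration, not the model's actual API:

```python
# Hypothetical sketch of a refer-and-ground exchange.
# The prompt template and coordinate format below are illustrative
# assumptions, not the official interface of the model.

def build_refer_prompt(question: str, region: tuple[int, int, int, int]) -> str:
    """Embed a referred image region (x1, y1, x2, y2) into the text prompt."""
    x1, y1, x2, y2 = region
    return f"{question} [region: {x1}, {y1}, {x2}, {y2}]"

# "Referring": the user points at a region of the image and asks about it.
prompt = build_refer_prompt("What is the object here?", (120, 45, 260, 190))
print(prompt)
# -> "What is the object here? [region: 120, 45, 260, 190]"

# "Grounding": the model's answer is expected to localize what it mentions
# with coordinates of its own, e.g.:
#   "A tabby cat [118, 40, 265, 195] sitting on a windowsill [0, 150, 480, 360]."
```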