The model has been added to Seeing AI, a free app for people with visual impairments that uses a smartphone camera to read text, identify people, and describe objects and surroundings. It’s also now available to app developers through the Computer Vision API in Azure Cognitive Services, and will start rolling out in Microsoft Word, Outlook, and PowerPoint later this year. The model can generate “alt text” image descriptions for web pages and documents, an important feature for people with limited vision that’s all too often unavailable.

The algorithm now tops the leaderboard of an image-captioning benchmark called nocaps. Microsoft achieved this by pre-training a large AI model on a dataset of images paired with word tags rather than full captions, which are less efficient to create. Each of the tags was mapped to a specific object in an image.

[Read: Microsoft unveils efforts to make AI more accessible to people with disabilities]

The pre-trained model was then fine-tuned on a dataset of captioned images, which enabled it to compose sentences. It then used its “visual vocabulary” to create captions for images containing novel objects.

Microsoft said the model is twice as good as the one it has used in products since 2015. The image below shows how these improvements work in practice:

However, the benchmark performance doesn’t mean the model will be better than humans at image captioning in the real world. Harsh Agrawal, one of the creators of the benchmark, told The Verge that its evaluation metrics “only roughly correlate with human preferences” and that it “only covers a small percentage of all the possible visual concepts.”
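To make the two-stage idea concrete, here is a deliberately simplified sketch in Python. It is not Microsoft's actual training pipeline; the function names and the template-based "caption" are illustrative stand-ins. Stage one builds a visual vocabulary from image–tag pairs; stage two, which in the real system is a fine-tuned neural model, is reduced here to a trivial sentence template that can still describe images containing objects it has only seen as tags.

```python
# Toy illustration of tag-based pre-training followed by caption composition.
# All names here are hypothetical, not part of any Microsoft API.

def build_visual_vocabulary(tagged_images):
    """Stage 1 stand-in: collect a 'visual vocabulary' from (image_id, tags) pairs."""
    vocab = set()
    for _image_id, tags in tagged_images:
        vocab.update(tags)
    return vocab

def compose_caption(tags, vocab):
    """Stage 2 stand-in: compose a sentence from recognized tags.

    The real system learns sentence composition from captioned images;
    a fixed template is used here purely for illustration.
    """
    known = sorted(t for t in tags if t in vocab)
    if not known:
        return "an image"
    return "an image containing " + ", ".join(known)

# Tag-only data is cheaper to collect than full captions, so the
# vocabulary can cover objects that never appear in any caption.
vocab = build_visual_vocabulary([
    ("img1", {"dog", "frisbee"}),
    ("img2", {"cat", "sofa"}),
])

print(compose_caption({"dog", "frisbee"}, vocab))
# A novel combination of known objects can still be described:
print(compose_caption({"cat", "frisbee"}, vocab))
```

The point of the sketch is the division of labor: the vocabulary comes from cheap tag data, while sentence composition is learned (or here, templated) separately, which is what lets the model caption images containing objects absent from the caption training set.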

Microsoft’s image captioning AI is pretty darn good at describing pictures