Apple released the fourth beta of the upcoming iOS 16.2 this Thursday (1st), along with the corresponding versions for other devices such as the iPad, Apple Watch, Apple TV, and Mac.
The main novelty of this beta, announced by the company itself, is support for Stable Diffusion, a project that generates sophisticated images using artificial intelligence.
“Today, we’re excited to release Core ML optimizations for Stable Diffusion in macOS 13.1 and iOS 16.2, along with code to get started with deploying to Apple Silicon devices.”
This type of service generates images entirely created by artificial intelligence; the user only has to write a sentence describing what they want. For example: “a high quality photo of an astronaut riding a (horse/dragon) in space”. The result is this:
With these optimizations, image generation will be much faster on iPhones, iPads, and Macs with Apple Silicon chips.
Currently, this image processing is done in the cloud. Apple’s goal is to encourage developers to build it directly into their applications, so that it runs on the local machine and thus preserves user privacy.
Apple has not yet announced a release date for the final version of iOS 16.2, but there is good reason to believe it will arrive on December 12.