
iPhones will indeed be able to speak for us

Apple will present its next mobile operating system, iOS 17, at its WWDC developer conference in June. However, the company has already previewed an interesting accessibility addition that may debut later this year, HVG writes.

The new feature, intended to help people whose ability to speak may be affected by certain diseases over time, will create a synthesized voice that sounds like the user's own.

By all indications, the system will read typed text out loud. The feature, called Personal Voice, will allow iPhones and iPads to create a digital version of the user's voice, which can then be used in face-to-face conversations as well as in phone calls and FaceTime video and audio calls.
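To give a sense of the type-to-speak flow described here, below is a minimal Swift sketch using Apple's existing AVSpeechSynthesizer API to read entered text aloud. The article does not say how Personal Voice itself will be exposed to developers, so the voice selection in this sketch is only an assumption; a Personal Voice would presumably take the place of the standard system voice.

import AVFoundation

// Minimal on-device text-to-speech: read out whatever text the user types.
let synthesizer = AVSpeechSynthesizer()

func speak(_ enteredText: String) {
    let utterance = AVSpeechUtterance(string: enteredText)
    // A standard system voice is used here; Personal Voice is expected to
    // substitute the user's own synthesized voice, but that API is not
    // described in the article.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}

speak("Text typed by the user is read out loud.")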

All you need is a 15-minute audio recording

The user will be able to create the voice by recording a 15-minute audio clip on their device; the recording is then processed by machine learning technology running locally on the device in order to protect privacy. In addition, Apple announced other accessibility features that will help people with disabilities and their caregivers use its devices.

Another machine learning-related addition is an update to the Magnifier assistive feature. It will debut as part of a detection mode that uses the camera, the LiDAR scanner, and machine learning together to recognize text and display it on the device's screen. Magnifier already helps improve the visibility of objects and text in many other situations as well.

Apple will most likely show the newly announced features in action at WWDC, and after the event they may also appear in the first iOS 17 beta; we will only know for sure once the conference begins on June 5.