On Siri, Privacy and Consistency

Apple talks often and proudly about respecting its users’ privacy. About not sharing or storing information ‘in the cloud’. About using machine learning and algorithms on the user’s device.

Yet despite all this talk of privacy and machine learning, Siri is dependent upon Apple’s servers to decipher spoken instructions.

This dependency on server-based processing has a negative impact on Siri’s effectiveness. On anything less than a Wi-Fi or 4G connection, Siri is often unable to recognise speech.

I now drive to and from the office, and I thought Siri would be a great tool for capturing thoughts and tasks along the way. Much of my route takes me through the English countryside, where connections range from 4G to GPRS and everything in between. At least half the time I attempt to use Siri it fails, or I receive the message that my watch will tap me on the wrist when it is ready. It never does.

So why, if Apple is so keen to process information on the device, is Siri completely dependent upon ‘the cloud’? Having Siri process speech on the device would improve its effectiveness and usefulness.

Hell, it might “just work”.
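As an aside, Apple’s own Speech framework suggests on-device recognition is feasible: since iOS 13 it can be told to keep transcription entirely off the network. Here is a minimal Swift sketch, assuming iOS 13 or later; the locale and the audio file URL are illustrative:

```swift
import Speech

// Transcribe a recorded audio file entirely on the device, with no
// fallback to Apple's servers. (Speech authorisation must already have
// been granted via SFSpeechRecognizer.requestAuthorization.)
func transcribeOnDevice(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-GB")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition is not available for this locale")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    // Refuse server-side processing: recognition works with no signal at all.
    request.requiresOnDeviceRecognition = true

    // Real code would retain the task so it can be cancelled; a sketch
    // just fires it off and prints the final transcription.
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Recognition failed: \(error.localizedDescription)")
        }
    }
}
```

If a third-party developer can opt out of the cloud with one property, it is hard to see why Siri itself cannot.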