My Week At CES: AI With Everything

CES. The tech event of the year. 100,000 tech fanatics, 32,949 tech buyers and 7,545 journalists, each jostling to see exactly what the next twelve months are going to serve up on a silver motherboard. So, had the future arrived? My week at CES was bittersweet. Here’s why.

Virtual Reality (is what I’m looking for… and I might just have to keep on looking)

I’ll admit it: I was pretty excited at the prospect of what would be happening in the realm of VR. But what was happening was very little. Quite unlike the wizardly VR and augmented reality offerings I’d imagined, CES seemed to present product after product of affordable VR headsets, each hoping to get in on the Oculus/HTC action (take Lenovo as a typical example). Whilst there were a few start-ups excitedly announcing their VR/AR prototypes, it’s a sad fact of technological life that without a funding miracle, they’ll likely come to little more than the prototypes of their present form.

AI next, I thought. Onwards and upwards.

Artificial Intelligence (Artificial being the operative word)


‘AI’ appeared everywhere at CES. Every product, every gadget, everywhere – from the Snoo, a self-rocking robotic bassinet, to the sleep habit-tracking Beddit.

But, whilst it was all pretty smart, it was also a long way off true AI. Take the house bot Kuri – the makers of this robot have given Kuri the tagline “insanely cute with some serious technology”. And this little guy can do all sorts of things – he can recognise voices, avoid taking a tumble down the stairs and respond with sounds, flashing lights and emotive eyes. Impressive specs, innovative tech – but the purpose escapes me.

Right now, I can’t help but feel that all too many are rushing to sate the consumer appetite for AI novelty. This artificial intelligence is all feeling a little artificial – failing abjectly to progress to machine learning and authentic AI that truly integrates with every element of our lives, helping us live better and healthier, and assisting us to advance.

Just like a kid at Christmas who didn’t get the toy he’d set his heart on, I’m left drawing on feelings that range from mild disappointment to stamp-my-feet, all-out frustration.

We’re missing a trick here. And worse still, consumers are soon going to wise up to the fact that ‘AI’ really means nothing more than pretty damn smart, but not quite intelligent.

AI, Cars and going back to the future


Concept car designers have long seemed to root around in a generic imaginary scrapyard of parts and features – wheels so hidden as to suggest that soon our Total Recall dreams of hover cars are going to become reality; those side-lifting doors that are most definitely obligatory and strangely Back to the Future DeLorean-esque (seriously, if they didn’t take off in the ’80s, I’m really not sure why they would now).

But aside from questions about style, do they have any substance – do they provide a subtext for what’s to come?

Showcasing their wares were Nissan, Audi, BMW, Faraday Future and just about every car manufacturer under the sun (a sun that has notably failed to start solar-powering our cars – as was suggested by Ford’s concept car back at CES 2014).

Features included BMW’s floating displays; Faraday’s impressive 1,000-horsepower beast; and Nissan’s system for taking control of a car when dangerous circumstances arise (with a special nod to Mercedes-Benz, who demoed their autonomous drone-like delivery van).


But with so much hustle and bustle at CES, I wonder whether the manufacturers have any inkling that driverless cars may well be much ado about nothing. A recent survey found that consumers remain uncertain as to whether they want anything more than car tech that can park for them, prevent an accident or stop their car from being stolen.

Perhaps we simply don’t trust tech in this way – if so, the question is, will we ever? Consumer trust hasn’t been helped by the much-publicised Tesla fatality, although it must be said that less attention has been paid to the Tesla owner saved by his own car (and that’s despite the autopilot not even being engaged).

In any event, all this may well be by the by. Even if there is appetite, there are other stumbling blocks too. As Kevin Clark, chief executive of Delphi (a company launching its own concept car at CES), points out:

“The reality is that the tech exists today… the biggest problem for the manufacturers is the cost, legal and liability responsibilities.”


It seems, then, that the realm of driverless cars is potentially as unwanted and challenged as AI gadgets are lacking in actual AI. And that is why my week at CES was bittersweet – so much promise, so little delivered. But there’s always next year, right? Tell me what you think.

Should we be scared of artificial intelligence?

Recent technology news coverage has been dominated by growing concerns over the development of artificial intelligence (AI). My interest was piqued by a particular Forbes article that frames this topic in terms of recent advancements and explores a variety of viewpoints from industry heavyweights. For your own reference, the article can be found here:

The author, Theo Priestley, hooked me and prompted me to read the full article – one in which he fundamentally disagrees with big technology players such as Elon Musk and Stephen Hawking. Hang on, why would someone be disagreeing with two of the most forward-thinking and well-respected human beings on our planet?! If anyone is going to know something about this growing phenomenon, surely it would be either of those people? We should be paying attention, not calling them ‘wrong’! Right?

Elon Musk, founder and CEO of Space Exploration Technologies Corp, says: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

Eminent physicist Stephen Hawking’s view is that “The development of full artificial intelligence could spell the end of the human race.”

Sure, the unknown is a scary thing, and the movie industry has done much to perpetuate the widely accepted ‘future’ of Artificial Intelligence – we’ve all seen the movies. There is a formulaic and uniform outcome – the Artificially Intelligent being turns on its creator, struggles for power, and is ultimately a force of destruction.

Theo describes how, in the real world, artificial intelligence handles queries, helps users and solves problems. This is possible through ‘understanding human behaviour, rather than the traditional method of building artificial intelligence by mimicking how the brain works through algorithms.’

Artificial intelligence products already on the market include Siri, Cortana and Google (OK Google). The newest AI to launch is ‘M’, the Facebook virtual assistant.

‘M’ would work in much the same way as Ava, the main character in the movie ‘Ex Machina’. (Side note: highly recommended viewing!)

The data gets pulled, and this will include personal information: names, nicknames, ages, etc. An important point Theo raises is that Facebook also owns WhatsApp and Instagram, meaning its data pull is far wider than that of any other network. It has access to phone numbers, photos, filter preferences, bio words and much, much more that the average user may not have considered before. This should make dealing with human queries an easy task, as Facebook already has access to most of a user’s information and can give them tailored results specific to that individual.

Part of the training and development of ‘M’ relies heavily on human ‘assistants’, who monitor and track their own endeavours to solve problems – which websites they visit to glean the best information, which keywords they type, etc. This will lead to further precise alterations that align ‘M’ more closely with human behaviour patterns.

You may think this will lead to a brand new type of breach of personal data, or even of personal freedom. However, companies such as Facebook have already been manipulating users’ news feeds as part of psychology experiments – according to them, to measure how emotions spread across social media. Personally, I feel Facebook may have done this to understand human behaviour in a deeper context. Our once tacit and private thoughts, reactions and emotions are now on the open market to be measured, monitored and ultimately mimicked.

With that in mind, I share some of Theo Priestley’s concerns: we should proceed with caution and prudence when allowing certain companies to create AI, and carefully examine their motives and end goals.

What are your thoughts on this? Do you think we should fear Artificial Intelligence? Or is the development of AI safe if the right organisations are at the forefront?

Leave a comment below, even if it’s just about how much you love/hate Ex Machina!