OpEdNews Op Eds, 5/9/14

Cognition as a Service: Can next-gen creepiness be countered with crowd-sourced ethics?

By David Solomonoff

Now that marketers use cloud computing to offer everything as a service -- infrastructure as a service, platform as a service, and software as a service -- what's left?

Cognitive computing, of course.

Cognition as a service (CaaS) is the next buzzword you'll be hearing. Going from the top of the stack to directly inside the head, AI in the cloud will power mobile and embedded devices to do things they lack the on-board capability to do themselves, such as speech recognition, image recognition, and natural-language processing (NLP). Apple's Siri, with its cloud-based voice recognition, was one of the first out of the gate, but a stampede of competitors has followed, including Wolfram Alpha, IBM's Watson, Google Now, and Cortana, as well as newer players like Ginger, ReKognition, and Jetlore.
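
To make the idea concrete, here is a minimal sketch in Python of the CaaS pattern: a thin client hands raw data off to a cloud service and gets structured "cognition" back. The endpoint, credential, and response format are hypothetical stand-ins for illustration, not any particular vendor's API.

    import requests

    API_KEY = "YOUR_API_KEY"                                # hypothetical credential
    ENDPOINT = "https://api.example-caas.com/v1/recognize"  # hypothetical endpoint

    def recognize_image(image_path):
        # The device only uploads the raw image; model inference happens in the cloud.
        with open(image_path, "rb") as f:
            resp = requests.post(
                ENDPOINT,
                headers={"Authorization": "Bearer " + API_KEY},
                files={"image": f},
            )
        resp.raise_for_status()
        # Imagined response shape, e.g. {"labels": ["dog", "frisbee"], "scores": [0.97, 0.83]}
        return resp.json()

    print(recognize_image("photo.jpg"))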

Companies want to know more about their customers, business partners, competitors, and employees -- as do governments about their citizens, and cybercriminals about their potential victims. To achieve that goal, the cloud will connect the Internet of Things (IoT) via machine-to-machine (M2M) communications.

The cognitive powers required will be embedded in operating systems so that apps can easily be developed by accessing the desired functionality through an API rather than requiring each developer to reinvent the wheel.
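
A rough sketch of what that might look like to an app developer: a platform-provided wrapper that hides the cloud round-trip behind a single call. The SDK class, endpoint, and device token below are imagined for illustration; no real operating system exposes exactly this interface.

    import requests

    class CognitionService:
        # Imagined platform SDK wrapping a cloud NLP endpoint behind one method.
        def __init__(self, base_url="https://cognition.example-os.net/v1", token="DEVICE_TOKEN"):
            self.base_url = base_url    # hypothetical platform endpoint
            self.token = token          # hypothetical per-device credential

        def parse_intent(self, utterance):
            # Send a natural-language request; get back a structured intent.
            resp = requests.post(
                self.base_url + "/intent",
                json={"text": utterance},
                headers={"Authorization": "Bearer " + self.token},
            )
            resp.raise_for_status()
            return resp.json()          # e.g. {"intent": "set_alarm", "time": "07:00"}

    # An app would then need only:
    #     svc = CognitionService()
    #     svc.parse_intent("wake me at seven tomorrow")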

Everything in your daily life will become smarter -- "context-sensitive" is another new buzz-phrase -- as devices provide a personalized experience based on databases of accumulated personal information combined with intelligence gleaned from large data sets.

The obvious question is to what extent the personalized experience is determined by the individual user as opposed to corporations, governments, and criminals. Vint Cerf, "the father of the Internet" and Google's Chief Internet Evangelist, recently warned of the privacy and security issues raised by the IoT.

But above and beyond the dangers of automated human malfeasance is the danger of increasingly intelligent tools developing an attitude problem.

Stephen Hawking recently warned of the dangers of AI running amuck:

Success in creating AI would be the biggest event in human history. It might also be the last, unless we learn how to avoid the risks. AI may transform our economy to bring both great wealth and great dislocation. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

Eben Moglen warned specifically about mobile devices that know too much and whose inner workings (and motivations, if they are actually intelligent) are unknown:

... we grew up thinking about freedom and technology under the influence of the science fiction of the 1960s. Visionaries perceived that in the middle of the first quarter of the 21st century, we'd be living contemporarily with robots.

They were correct. We do. They don't have hands and feet. Most of the time we're the bodies. We're the hands and feet. We carry them everywhere we go. They see everything, which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

But we grew up imagining that these robots would have, incorporated in their design, a set of principles.

We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day.

They work for other people. They're designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we're cooked.


David Solomonoff is President of the New York Chapter of the Internet Society (http://isoc-ny.org), a nonprofit that works for the open development of technology, Internet freedom, and access for all.
