
Artificial Intelligence [AI] (assistants, machine learning, big data) #28

Open · svgeesus opened this issue Oct 18, 2016 · 12 comments

@svgeesus (Contributor)

No description provided.

@draggett (Member) commented Nov 3, 2016

Whilst AI can be applied in isolation, much greater benefits are possible when applied to large quantities of data from a breadth of sources. The IoT will enable the collection of vast amounts of data that can be used together with advanced machine learning techniques (including “deep learning”). However, to fully realise the potential, we need to overcome the data silos through open standards for modelling things and their relationships, along with the communication and security details needed for interoperation across different platforms. W3C under the leadership of Sir Tim Berners-Lee (inventor of the Web) has a strong track record in developing standards for semantic technologies. No one organisation can cover the huge range of application domains, so this is all about collaboration and coordination. We also need to address the needs of startups and SMEs as these cannot afford the time or manpower associated with traditional standards processes. We thus need agile processes and open governance frameworks for semantic vocabulary development.

A lot has been said recently about the impact of AI on replacing people across a broad spectrum of jobs, e.g. Martin Ford’s “The Rise of the Robots”, and a consequent increase in inequality and the social unrest this entails. We need to take steps to encourage investment in AI-based solutions to boost productivity, but to do so in a way that focuses on collaboration between machines and humans, combining the complementary strengths of both. People are more dexterous than machines, and think more flexibly. More important still is that people are able to interact at a social and emotional level that takes feelings and values into account. Computers may be able to handle vast quantities of information with ease, yet have practically no understanding of what we consider to be common sense. Computer systems can be very efficient, yet very brittle when dealing with situations outside of their training. At a human level, when dealing with individual circumstances, this brittleness could be a disaster.

Recently, a UK insurance company announced that it was offering discounts to young people seeking to insure their first car, based upon an analysis of their Facebook posts. This was to be on an opt-in basis, and to rely on an AI-based solution that correlated different writing styles with the likelihood of insurance claims. Allegedly, people who write tidily organized prose are less likely to be overconfident, and less likely to be involved in car accidents! This kind of approach is liable to discriminate against those who don't fit the expected norms, and results in decisions without justifications. The potential business benefits of applying AI need to be weighed against the loss of flexibility and the risk of discrimination. In this particular example, Facebook prohibited the proposed approach as violating its terms of use.

There is an opportunity to use the Web to facilitate research into the next big step for AI. Whilst deep learning is in vogue, it is restricted in its scope, and this is often swept under the carpet by the exuberant hype. Deep learning with multi-layer neural networks is primarily about classification problems, e.g. machine vision. By contrast, very little attention is given to providing computers with a grasp of common sense. This covers a broad range of skills that we humans take for granted, e.g. knowing that if you push on the end of a piece of string it will bend, or that if you push a pin into a cork, it will make a hole in the cork and not in the pin. As social beings, we also have many skills for interacting as part of a group of individuals: why is this person telling me this, what is he feeling, what does he think of me?

Today’s AI is limited to spotting patterns across large numbers of cases. This results in recommendation systems trying to sell me hotel rooms in cities that I have just visited, or shoes like the ones I just bought. These systems lack the flexibility to reason at a deeper level based upon competence with common sense skills. Improving on this would increase the value and reduce the brittleness of AI solutions. To address this we need interdisciplinary approaches combining anthropology, cognitive science, advances in machine learning, and traditional AI. In particular, I see plenty of potential for using the Web to crowdsource the development of training materials for teaching computers to be competent at a broad range of common sense skills. Think of a rather large set of lesson plans for teaching and assessing particular skills. These materials could be used for competitions in which different research groups vie to be the best at the challenges set out for each competition. My ideas on this are strongly inspired by the work of cognitive scientist John R. Anderson on ACT-R and AI pioneer Marvin Minsky on abstraction layers for mental processes. The use of regular competitions has been pioneered by DARPA, e.g. for speech processing and, more recently, cyber security and advanced robotics.
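The shallow pattern-spotting described above can be illustrated with a toy sketch (all data and names here are invented): a co-occurrence recommender has no model of *why* items go together, so it keeps suggesting more of what the shopper has already covered.

```python
from collections import Counter

def cooccurrence_recommend(purchase_history, basket, top_n=2):
    """Recommend the items that most often co-occur with the current basket.

    This is pure pattern-spotting: no reasoning about whether the need
    behind the purchase has already been satisfied.
    """
    counts = Counter()
    for past_basket in purchase_history:
        if set(basket) & set(past_basket):
            for item in past_basket:
                if item not in basket:
                    counts[item] += 1
    return [item for item, _ in counts.most_common(top_n)]

# Toy data: people who bought shoes tended to buy yet more footwear.
history = [
    ["running shoes", "dress shoes"],
    ["running shoes", "socks"],
    ["dress shoes", "shoe polish"],
]
print(cooccurrence_recommend(history, ["running shoes"]))
```

Having just bought running shoes, the shopper is offered dress shoes and socks; nothing in the counts can express that the footwear need is now met.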

What role should W3C play in this? One idea is to launch a W3C Interest Group to act as a forum for discussion around the framework and standards needed to enable crowdsourcing of lesson plans for particular skills, along with ideas for combining techniques from different disciplines.

@draggett (Member) commented Nov 4, 2016

Amazon and Google now have competing smart speakers which respond to spoken questions and commands, and can be used to stream music and to control smart home appliances. They further allow users to access services such as ordering pizza, and to answer general knowledge questions. Smart speakers compete with an increasing range of voice assistants on smartphones, such as Apple's Siri, Google Now, and Microsoft's Cortana, amongst others.

Whilst in principle such assistants have access to vast amounts of information in the cloud, today's solutions are limited in what they understand and how they can respond. This is mostly due to a lack of competence in everyday common sense skills, something that is widely considered to be the next big step for AI. The assistants can be programmed to respond to common kinds of questions. This involves a means to recognise the user's intent, to match it to available service providers, and to translate the results of invoking the service into human speech.
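The pipeline sketched above (recognise intent, dispatch to a service, render a reply) can be caricatured in a few lines. This is a deliberately naive sketch with invented intent patterns and canned replies; real assistants use trained classifiers, not regexes, but the brittleness outside the programmed intents is the same.

```python
import re

# Hypothetical intent table: pattern -> service handler (all invented here).
INTENTS = {
    r"play (?P<track>.+)": lambda m: f"Playing {m.group('track')}.",
    r"order (?P<food>pizza|sushi)": lambda m: f"Ordering {m.group('food')} now.",
}

def respond(utterance):
    """Match an utterance against known intents and render a spoken reply."""
    for pattern, handler in INTENTS.items():
        m = re.fullmatch(pattern, utterance.strip().lower())
        if m:
            return handler(m)
    # Anything outside the programmed intents simply fails -- no common sense.
    return "Sorry, I don't understand."

print(respond("Order pizza"))
print(respond("Recommend a good book"))
```

The first request dispatches cleanly; the second, though perfectly ordinary, falls straight through to the failure branch.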

The Web of Things can help to create an open market of services accessible from voice assistants, through an abstraction layer that decouples applications from the underlying protocols and enables interoperability across different platforms. Key to this is the need for open standards for metadata. W3C is in the process of launching a Working Group to standardise the metadata vocabularies for describing the interaction model exposed to applications, as well as the communication and security requirements that enable one platform to interoperate with another.

However, to ensure that providers and consumers of services share the same meaning, we need to enable semantic interoperability. This requires standards for describing different kinds of things and their relationships. This introduces the challenge of encouraging convergence on metadata terms. If different communities independently develop their own sets of terms, we're going to have problems in understanding each other. This prompts the need for discussion around agile processes for vocabulary management, along with the associated governance models. What role can W3C play in this?
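The interoperability problem described above is easy to demonstrate with invented examples: two communities coin different terms for the same concepts, and consumers can only reconcile the records through an agreed alignment table, which is exactly what convergent vocabulary development would avoid having to maintain.

```python
# Two hypothetical vendors describe the same sensor readings differently.
VENDOR_A = {"temp": 21.5, "humid": 40}
VENDOR_B = {"temperature_c": 19.0, "relative_humidity": 55}

# An agreed mapping from each local term to a shared canonical term.
ALIGNMENT = {
    "temp": "temperature", "temperature_c": "temperature",
    "humid": "humidity", "relative_humidity": "humidity",
}

def normalise(record):
    """Rewrite a record into the shared vocabulary, keeping unknown terms as-is."""
    return {ALIGNMENT.get(key, key): value for key, value in record.items()}

print(normalise(VENDOR_A))
print(normalise(VENDOR_B))
```

Every new vendor vocabulary adds rows to the alignment table; shared, governed vocabularies keep that table from growing without bound.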

@dontcallmedom changed the title from "AI (assistants, machine learning, big data)" to "Artificial Intelligence [AI] (assistants, machine learning, big data)" on Jun 13, 2017
@dontcallmedom (Member)

An early exploration of an API to facilitate building machine learning in the browser: https://angelokai.github.io/WebML/

@nitedog commented Nov 16, 2017

Artificial Intelligence is also relevant for accessibility, for example to detect patterns, during authoring or in existing content, that present potential barriers. This is a similar challenge to detecting potential security vulnerabilities in code. That is, accessibility could leverage developments in other fields, while at the same time solutions initially developed for other fields could expand their market into accessibility.
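Some such barriers are mechanically detectable even without AI, which gives a baseline the smarter tooling would build on. A minimal sketch using only the standard library, flagging `<img>` elements that lack an `alt` attribute (the sample page is invented for illustration):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect the src of every <img> tag that has no alt attribute --
    one simple, mechanically detectable accessibility barrier."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing.append(attributes.get("src", "<no src>"))

page = '<p><img src="chart.png"><img src="logo.png" alt="Company logo"></p>'
checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)
```

AI-based detection would aim at the barriers this kind of rule cannot see, such as alt text that is present but meaningless, or content whose reading order only makes sense visually.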

Another use of Artificial Intelligence is to increase the capabilities of services, browsers, and assistive technologies to better support accessibility. For example, image recognition is already being employed by some social media platforms -- primarily for indexing purposes, but sometimes it is also offered as an accessibility feature. Some assistive technology developers are also beginning to employ it.

These developments could have significant impact on accessibility, including on the definition of accessible versus inaccessible content. They may affect accessibility requirements in specifications and their development within W3C, as well as the markets and players in web accessibility around W3C. In particular, increased automation and tool support is rapidly changing the cost-benefit analysis for accessibility.

W3C can facilitate developments in this area by mapping out the challenges that could be overcome by artificial intelligence, and by providing commonly agreed on test cases to validate the quality of different implementations. This is already happening to a certain degree in the area of Accessibility Conformance Testing (ACT), where automation is essential to help address the sheer amount of existing content.

@siusin commented Jan 25, 2018

AI in Front-end Development: Turning Design Mockups Into Code With Deep Learning

@wseltzer (Member) commented Mar 8, 2018

@vivienlacourba (Member) commented Mar 30, 2018

The French President announced a plan on this topic; see a summary (in French) on lesnumeriques.com (Google Translate version).

It mentions: a national effort coordinated by Inria, a broader opening of open data, privacy issues, public national and European financing, and autonomous vehicles.

@plehegar (Member) commented Jun 7, 2018

With the release of TensorFlow.js, Core ML, and Azure ML, this is becoming more real. TensorFlow.js uses WebGL, and I guess we should expect WebGPU to become relevant in this area.

@pchampin
Since this issue was opened, a number of groups have been started which I believe cover it pretty well:

Seems to me that this issue can be closed, possibly opening a new one on a more specific aspect which still requires attention. @draggett WDYT?

@draggett (Member)

@pchampin: I still think we need a strategic view on where things are going and how W3C could contribute. For example, the challenges around the democratisation of AI along with the emerging role of AI in digital transformation.

9 participants