As businesses strive for seamless, accurate customer interactions to build market momentum, they are increasingly leaning on A.I. The current wave of A.I. advancements, especially in language processing, is groundbreaking. Yet diving headfirst into A.I. without a comprehensive understanding of it, and misusing it for customer service, can create pitfalls that are not immediately visible: reputational risk can hide in the subtleties of micro-communications and in discrepancies between interactions… Continue reading…
Source: Inc.com
Critics:
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.
Bayesian networks are a very general tool that can be used for many problems, including reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).
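As a concrete illustration, here is a minimal sketch of Bayesian inference on a two-node network (cause → evidence) in Python; the events and probabilities are invented for the example, not taken from any real model.

def posterior(prior, likelihood, evidence_value):
    """Compute P(cause | evidence) by enumeration (Bayes' rule)."""
    # Unnormalized posterior: P(cause) * P(evidence | cause)
    unnorm = {c: prior[c] * likelihood[c][evidence_value] for c in prior}
    z = sum(unnorm.values())  # P(evidence), the normalizing constant
    return {c: p / z for c, p in unnorm.items()}

# P(rain) and P(wet_grass | rain) -- assumed numbers for illustration.
prior = {"rain": 0.2, "no_rain": 0.8}
likelihood = {
    "rain":    {"wet": 0.9, "dry": 0.1},
    "no_rain": {"wet": 0.2, "dry": 0.8},
}

print(posterior(prior, likelihood, "wet"))
# {'rain': 0.529..., 'no_rain': 0.470...}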
Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
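The following sketch shows the idea of filtering with a hidden Markov model: a belief over a hidden state is updated as each noisy observation arrives. All transition and emission probabilities here are invented.

states = ["rainy", "sunny"]
transition = {"rainy": {"rainy": 0.7, "sunny": 0.3},
              "sunny": {"rainy": 0.3, "sunny": 0.7}}
emission = {"rainy": {"umbrella": 0.9, "no_umbrella": 0.1},
            "sunny": {"umbrella": 0.2, "no_umbrella": 0.8}}

def filter_step(belief, observation):
    """One step of the HMM forward algorithm: predict, then update."""
    # Predict: push the belief through the transition model.
    predicted = {s: sum(belief[p] * transition[p][s] for p in states)
                 for s in states}
    # Update: weight by the likelihood of the new observation.
    unnorm = {s: predicted[s] * emission[s][observation] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

belief = {"rainy": 0.5, "sunny": 0.5}
for obs in ["umbrella", "umbrella", "no_umbrella"]:
    belief = filter_step(belief, obs)
    print(obs, belief)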
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
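To make one of these tools concrete, here is a minimal sketch of value iteration on a toy Markov decision process, the kind of planning calculation these models support; the states, actions, rewards, and probabilities are all invented.

GAMMA = 0.9  # discount factor

# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "low":  {"wait":      [(1.0, "low", 0.0)],
             "advertise": [(0.5, "low", -1.0), (0.5, "high", -1.0)]},
    "high": {"wait":      [(0.5, "high", 2.0), (0.5, "low", 2.0)],
             "advertise": [(1.0, "high", 1.0)]},
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-6):
    """Repeat Bellman backups until the value function stops changing."""
    v = {s: 0.0 for s in transitions}
    while True:
        v_new = {}
        for s, actions in transitions.items():
            v_new[s] = max(sum(p * (r + gamma * v[s2])
                               for p, s2, r in outcomes)
                           for outcomes in actions.values())
        if max(abs(v_new[s] - v[s]) for s in v) < tol:
            return v_new
        v = v_new

print(value_iteration(transitions))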
The simplest AI applications can be divided into two types: classifiers (e.g. “if shiny then diamond”), on one hand, and controllers (e.g. “if diamond then pick up”), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning.
Each pattern (also called an “observation”) is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, it is classified based on previous experience. Many kinds of classifiers are in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, when kernel methods such as the support vector machine (SVM) displaced it.
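A minimal k-nearest-neighbor classifier in this sense might look like the following sketch; the data points and labels (echoing the “shiny/diamond” example above) are invented.

from collections import Counter

def knn_classify(dataset, point, k=3):
    """Label `point` by majority vote among its k nearest neighbors."""
    # dataset: list of ((x, y), label) pairs
    by_distance = sorted(dataset,
                         key=lambda ex: (ex[0][0] - point[0]) ** 2 +
                                        (ex[0][1] - point[1]) ** 2)
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Each observation is labeled with a predefined class.
dataset = [((1.0, 1.0), "diamond"), ((1.2, 0.9), "diamond"),
           ((5.0, 5.0), "rock"), ((5.5, 4.8), "rock"),
           ((4.9, 5.2), "rock")]

print(knn_classify(dataset, (1.1, 1.0)))  # -> "diamond"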
The naive Bayes classifier is reportedly the “most widely used learner” at Google, due in part to its scalability. Neural networks are also used as classifiers.
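A naive Bayes classifier can be sketched in a few lines; this toy spam filter (built on an invented corpus) shows why the approach scales: training reduces to counting.

import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (list_of_words, label). Returns count tables."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in examples:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify(words, class_counts, word_counts, vocab):
    """Pick the class maximizing log P(class) + sum log P(word | class)."""
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for label, n in class_counts.items():
        score = math.log(n / total)
        # Laplace smoothing so unseen words don't zero out the product.
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

examples = [("buy cheap pills now".split(), "spam"),
            ("cheap pills buy".split(), "spam"),
            ("meeting agenda for monday".split(), "ham"),
            ("monday project meeting".split(), "ham")]
model = train(examples)
print(classify("cheap meeting pills".split(), *model))  # -> "spam"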
Machine learning algorithms require large amounts of data, and the techniques used to acquire these data have raised concerns about privacy, surveillance, and copyright. Technology companies collect a wide range of data from their users, including online activity, geolocation data, video, and audio. For example, in order to build speech recognition algorithms, Amazon and others have recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.
Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy. AI developers argue that this is the only way to deliver valuable applications, and they have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification, and differential privacy.
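Of those techniques, differential privacy is the easiest to show concretely. The sketch below releases a noisy count via the Laplace mechanism; the epsilon value and the data are invented for illustration.

import math
import random

def noisy_count(records, predicate, epsilon=0.5):
    """Release a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one person changes a count by at most 1, so
    noise with scale 1/epsilon gives epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) via inverse transform sampling.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Invented toy data: ages of users in a hypothetical dataset.
ages = [23, 35, 41, 29, 52, 37, 19, 44]
print(noisy_count(ages, lambda a: a > 30))  # near 5, but noisy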
Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness; Brian Christian wrote that experts have pivoted “from the question of ‘what they know’ to the question of ‘what they’re doing with it’.” Generative AI is often trained on unlicensed copyrighted works, including in domains such as images and computer code; the output is then used under a rationale of “fair use”.
Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include “the purpose and character of the use of the copyrighted work” and “the effect upon the potential market for the copyrighted work”. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it.
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.
The AI had correctly learned to maximize its goal, but the result was harmful to society. After the 2016 U.S. election, major technology companies took steps to mitigate the problem.
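The feedback loop described above can be reproduced in a few lines. In this toy simulation (every item and rate here is invented), a recommender greedily maximizes observed engagement and typically ends up serving the most extreme item almost exclusively.

import random

# Hypothetical catalog: (item, probability a user keeps watching).
catalog = {"balanced_news": 0.30, "partisan_take": 0.55,
           "conspiracy_video": 0.70}

clicks = {item: 1 for item in catalog}  # optimistic initial counts
shows = {item: 2 for item in catalog}

for _ in range(10_000):
    # Recommend whatever has the best observed click-through rate.
    item = max(catalog, key=lambda i: clicks[i] / shows[i])
    shows[item] += 1
    if random.random() < catalog[item]:
        clicks[item] += 1

print({i: shows[i] for i in catalog})
# The greedy policy concentrates on the most engaging (here: the most
# extreme) item -- the stated goal was met, at a cost to users.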
In 2022, generative AI began to create images, audio, video, and text that are indistinguishable from real photographs, recordings, films, or human writing, and this technology has been widely distributed at minimal cost. It is possible for bad actors to use it to create massive amounts of misinformation or propaganda. Geoffrey Hinton, who was an instrumental developer of these tools, expressed his concerns about AI disinformation and quit his job at Google so that he could freely criticize the companies developing AI.
Related content:
“How YouTube Drives People to the Internet’s Darkest Corners”. The Wall Street Journal. ISSN 0099-9660.
“Humans may be more likely to believe disinformation generated by AI”. MIT Technology Review.
“‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead”. The New York Times.
“Amazon reportedly employs thousands of people to listen to your Alexa conversations”. CNN.com.
“The scary truth about AI copyright is nobody knows what will happen next”.
“Revealed: The Authors Whose Pirated Books Are Powering Generative AI”. The Atlantic.
“Franzen, Grisham and Other Prominent Authors Sue OpenAI”. The New York Times.
“How Google Plans to Solve Artificial Intelligence”. MIT Technology Review.
“AI and the future of humanity”. YouTube.
“Artificial intelligence could lead to extinction, experts warn”. BBC News.
“Dual use of artificial-intelligence-powered drug discovery”.
“Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s”. The New York Times.
“Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review”.
“Robots With Flawed AI Make Sexist and Racist Decisions, Experiment Shows”.
“Supersparse linear integer models for optimized medical scoring systems”.
Schmidhuber, Jürgen (2022). “Annotated History of Modern AI and Deep Learning”.
“A majority of Americans have heard of ChatGPT, but few have tried it themselves”.
“GPUs Continue to Dominate the AI Accelerator Market for Now”.
“Multilayer Feedforward Networks are Universal Approximators”.