As businesses strive for seamless and accurate customer interactions to build their market momentum, they are increasingly leaning on A.I. The current wave of A.I. advancements, especially in the context of language processing, is groundbreaking.
Yet, diving headfirst into A.I. without a comprehensive understanding and misusing it for customer service can result in pitfalls that may not be immediately visible. The subtleties of reputational risk can be hidden within micro-communications, and discrepancies in interactions between potential customers and A.I. chatbots might not become apparent until significant damage has occurred.
For business stakeholders, understanding the depth and breadth of A.I. can provide a competitive advantage and influence how you deploy and utilize A.I. in your business. …Continue reading…
Source: How Bad A.I. Can Hurt Your Company’s Customer Service | Inc.com
Critics:
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.
Bayesian networks are a very general tool that can be used for many problems, including reasoning (using the Bayesian inference algorithm), learning (using the expectation-maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks).
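To make Bayesian inference concrete, here is a minimal sketch: a toy two-node network (a hypothetical Rain → WetGrass example with made-up probabilities, not taken from any source above), where the posterior is computed by enumerating the joint distribution and normalizing — exactly what the Bayesian inference algorithm does at small scale.

```python
# Toy Bayesian network: Rain -> WetGrass (hypothetical, made-up numbers).
# Given P(Rain) and P(WetGrass | Rain), infer P(Rain | WetGrass=True)
# by enumerating the joint distribution and normalizing (Bayes' rule).

p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.1}

# Joint probability of each value of Rain together with WetGrass=True
joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given_rain[r]
         for r in (True, False)}

# Normalize over the evidence to get the posterior
posterior_rain = joint[True] / (joint[True] + joint[False])
print(round(posterior_rain, 3))  # 0.18 / (0.18 + 0.08) = 0.692
```

Observing wet grass raises the probability of rain from the 0.2 prior to about 0.69; larger networks apply the same enumerate-and-normalize logic over many variables.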
Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).
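Filtering in this sense can be illustrated with a minimal one-dimensional Kalman filter (the measurement values and noise variance below are invented for illustration): each noisy reading nudges the estimate in proportion to the Kalman gain, and the estimate's variance shrinks over time.

```python
# Minimal 1-D Kalman filter tracking a constant value from noisy
# measurements (toy numbers; an illustrative sketch, not a full filter
# with a process model).

measurements = [4.9, 5.2, 4.8, 5.1, 5.0]

x, p = 0.0, 1.0  # state estimate and its variance (deliberately vague prior)
r = 0.1          # measurement noise variance

for z in measurements:
    k = p / (p + r)       # Kalman gain: how much to trust the new reading
    x = x + k * (z - x)   # pull the estimate toward the measurement
    p = (1 - k) * p       # the estimate becomes more certain

print(round(x, 2))  # 4.9
```

After five readings the estimate has converged near the true value and its variance has dropped from 1.0 to about 0.02.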
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design.
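A small sketch of a Markov decision process solved by value iteration (a hypothetical two-state, two-action world with made-up transitions and rewards) shows how an agent can plan by repeatedly applying the Bellman optimality update:

```python
# Value iteration on a tiny, made-up Markov decision process.
# P[s][a] is a list of (probability, next_state, reward) outcomes.

states = [0, 1]
actions = ["stay", "move"]
gamma = 0.9  # discount factor

P = {
    0: {"stay": [(1.0, 0, 0.0)], "move": [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 3.0)], "move": [(1.0, 0, 0.0)]},
}

V = {s: 0.0 for s in states}
for _ in range(200):  # iterate the Bellman optimality update to convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in actions)
         for s in states}

# Extract the greedy policy with respect to the converged values
policy = {s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                            for p, s2, r in P[s][a]))
          for s in states}
print(policy)  # {0: 'move', 1: 'stay'}
```

The agent learns to gamble on the risky "move" from state 0 (a chance at a large reward) but to sit on the steady payoff in state 1 — the kind of trade-off decision theory formalizes.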
The simplest AI applications can be divided into two types: classifiers (e.g. “if shiny then diamond”), on one hand, and controllers (e.g. “if diamond then pick up”), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning.
Each pattern (also called an “observation”) is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, it is classified based on previous experience. There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, when kernel methods such as the support vector machine (SVM) displaced it.
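A minimal k-nearest neighbor classifier makes the pattern-matching idea concrete (the labeled points below are invented, echoing the “if shiny then diamond” example above):

```python
# Toy k-nearest neighbor classifier: a labeled data set of
# (observation, class) pairs and majority vote among the closest points.
from collections import Counter
import math

data_set = [
    ((1.0, 1.2), "diamond"), ((0.9, 1.0), "diamond"),
    ((4.0, 4.2), "glass"),   ((4.1, 3.9), "glass"),
]

def classify(x, k=3):
    # Rank the stored observations by Euclidean distance to the new
    # point, then vote among the k closest labels.
    nearest = sorted(data_set, key=lambda d: math.dist(x, d[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(classify((1.1, 0.8)))  # diamond
```

A new observation is assigned the class its nearest labeled neighbors share — the "previous experience" here is nothing more than the stored data set.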
The naive Bayes classifier is reportedly the “most widely used learner” at Google, due in part to its scalability. Neural networks are also used as classifiers.

Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
Technology companies collect a wide range of data from their users, including online activity, geolocation data, video and audio. For example, in order to build speech recognition algorithms, Amazon and others have recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.
Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy. AI developers argue that this is the only way to deliver valuable applications, and they have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.
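Differential privacy can be sketched with the classic Laplace mechanism: answer an aggregate query with noise calibrated to how much any single record could change the result. The data, the epsilon value, and the query below are all illustrative — this is a sketch of the idea, not a vetted privacy implementation.

```python
# Laplace mechanism sketch: a noisy counting query over toy data.
import math
import random

records = [17, 25, 31, 42, 58, 63, 70]  # e.g. user ages (made up)

def private_count_over_40(epsilon=0.5, seed=None):
    true_count = sum(1 for age in records if age > 40)
    sensitivity = 1  # one person changes the count by at most 1
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via inverse-CDF of a uniform draw
    u = random.Random(seed).uniform(-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

print(private_count_over_40())
```

A smaller epsilon means more noise and stronger privacy; the query stays useful in aggregate while no individual record can be pinned down from the answer.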
Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness; Brian Christian wrote that experts have pivoted “from the question of ‘what they know’ to the question of ‘what they’re doing with it’.”

Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under a rationale of “fair use”.
Experts disagree about how well, and under what circumstances, this rationale will hold up in courts of law; relevant factors may include “the purpose and character of the use of the copyrighted work” and “the effect upon the potential market for the copyrighted work”. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it.
Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.
The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem. In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.
This technology has been widely distributed at minimal cost. Geoffrey Hinton (who was an instrumental developer of these tools) expressed his concerns about AI disinformation. He quit his job at Google to freely criticize the companies developing AI….
Related content:
- Nicas, Jack (7 February 2018). “How YouTube Drives People to the Internet’s Darkest Corners”. The Wall Street Journal. ISSN 0099-9660. Retrieved 16 June 2018.
- Williams, Rhiannon (28 June 2023), “Humans may be more likely to believe disinformation generated by AI”, MIT Technology Review
- Metz, Cade (4 May 2023). “‘The Godfather of A.I.’ Quits Google and Warns of Danger Ahead”. The New York Times. Archived from the original on 1 July 2023.
- Valinsky, Jordan (11 April 2019), “Amazon reportedly employs thousands of people to listen to your Alexa conversations”, CNN.com
- Vincent, James (15 November 2022). “The scary truth about AI copyright is nobody knows what will happen next”. The Verge. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Reisner, Alex (19 August 2023), “Revealed: The Authors Whose Pirated Books are Powering Generative AI”, The Atlantic
- Alter, Alexandra; Harris, Elizabeth A. (20 September 2023), “Franzen, Grisham and Other Prominent Authors Sue OpenAI”, The New York Times
- Simonite, Tom (31 March 2016). “How Google Plans to Solve Artificial Intelligence”. MIT Technology Review.
- Harari, Yuval Noah (2023). “AI and the future of humanity”. YouTube.
- Vallance, Chris (30 May 2023). “Artificial intelligence could lead to extinction, experts warn”. BBC News. Archived from the original on 17 June 2023. Retrieved 18 June 2023.
- Urbina, Fabio; Lentzos, Filippa; Invernizzi, Cédric; Ekins, Sean (7 March 2022). “Dual use of artificial-intelligence-powered drug discovery”. Nature Machine Intelligence. 4 (3): 189–191. doi:10.1038/s42256-022-00465-9. PMC 9544280. PMID 36211133. S2CID 247302391.
- Rose, Steve (11 July 2023). “AI Utopia or dystopia?”. The Guardian Weekly. pp. 42–43.
- Grant, Nico; Hill, Kashmir (22 May 2023). “Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s”. The New York Times.
- Berdahl, Carl Thomas; Baker, Lawrence; Mann, Sean; Osoba, Osonde; Girosi, Federico (7 February 2023). “Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review”. JMIR AI. 2: e42936. doi:10.2196/42936. ISSN 2817-1705. S2CID 256681439. Archived from the original on 21 February 2023. Retrieved 21 February 2023.
- Dockrill, Peter (27 June 2022), “Robots With Flawed AI Make Sexist And Racist Decisions, Experiment Shows”, Science Alert, archived from the original on 27 June 2022
- Ustun, B.; Rudin, C. (2016). “Supersparse linear integer models for optimized medical scoring systems”. Machine Learning. 102 (3): 349–391. doi:10.1007/s10994-015-5528-6. S2CID 207211836.
- Schmidhuber, Jürgen (2022). “Annotated History of Modern AI and Deep Learning”.
- Chen, Stephen (25 March 2023). “Artificial intelligence, immune to fear or favour, is helping to make China’s foreign policy | South China Morning Post”. Archived from the original on 25 March 2023. Retrieved 26 March 2023.
- Vogels, Emily A. (24 May 2023). “A majority of Americans have heard of ChatGPT, but few have tried it themselves”. Pew Research Center. Archived from the original on 8 June 2023. Retrieved 15 June 2023.
- Kobielus, James (27 November 2019). “GPUs Continue to Dominate the AI Accelerator Market for Now”. InformationWeek. Archived from the original on 19 October 2021. Retrieved 11 June 2020.
- Gertner, Jon (18 July 2023). “Wikipedia’s Moment of Truth – Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?”. The New York Times. Archived from the original on 18 July 2023. Retrieved 19 July 2023.
- Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (1989). Multilayer Feedforward Networks are Universal Approximators (PDF). Neural Networks. Vol. 2. Pergamon Press. pp. 359–366.
- Cybenko, G. (1988). Continuous valued neural networks with two hidden layers are sufficient (Report). Department of Computer Science, Tufts University.
- Good, I. J. (1965), Speculations Concerning the First Ultraintelligent Machine
- Wong, Matteo (19 May 2023), “ChatGPT Is Already Obsolete”, The Atlantic
- Christian, Brian (2020). The Alignment Problem: Machine learning and human values. W. W. Norton & Company. ISBN 978-0-393-86833-3. OCLC 1233266753.