Wednesday, February 19, 2025

How Far Would You Trust AI To Make Important Decisions? 

Photos Hobby, Unsplash

From tailored Netflix recommendations to personalized Facebook feeds, artificial intelligence (AI) adeptly serves content that matches our preferences and past behaviors. But while a restaurant tip or two is handy, how comfortable would you be if AI algorithms were in charge of choosing your medical specialist or your next hire? Now, a new study from the University of South Australia shows that most people are more likely to trust AI in situations where the stakes are low, such as music suggestions… Continue reading


Source: Tech Xplore


Critics:

Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining “interesting” and actionable inferences from large databases), and other areas.
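To make the deduction idea concrete, here is a minimal sketch of forward-chaining inference over a toy knowledge base; the facts, the parent/grandparent predicates, and the single hand-written rule are invented for illustration, not drawn from any particular system.

```python
# A minimal sketch of forward-chaining deduction over a toy knowledge base.
# The facts, rules, and predicate names here are invented for illustration.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# One hand-written rule: parent(X, Y) and parent(Y, Z) -> grandparent(X, Z).
def apply_grandparent_rule(facts):
    new_facts = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y1 == y2:
                new_facts.add(("grandparent", x, z))
    return new_facts

# Iterate until no new facts can be derived (a fixed point).
while True:
    derived = apply_grandparent_rule(facts) - facts
    if not derived:
        break
    facts |= derived

print(("grandparent", "alice", "carol") in facts)  # True
```

Real knowledge-representation systems generalize this pattern with variables, unification, and large rule sets, but the derive-until-fixed-point loop is the same idea.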

Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as “facts” or “statements” that they could express verbally). There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.

An “agent” is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the “utility”) that measures how much the agent prefers it.

For each possible action, it can calculate the “expected utility”: the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility. In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation it is in (it is “unknown” or “unobservable”), and it may not know for certain what will happen after each possible action (it is not “deterministic”). It must make a probabilistic guess, choose an action, and then reassess the situation to see whether the action worked. In some problems, the agent’s preferences may be uncertain, especially if there are other agents or humans involved. These preferences can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to refine them.
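As a concrete illustration of the expected-utility calculation just described, here is a minimal sketch in Python; the actions, outcome probabilities, and utility numbers are invented for the example.

```python
# A minimal sketch of expected-utility decision-making.
# Actions, outcomes, probabilities, and utilities below are made up.

actions = {
    "take_umbrella": [   # (probability of outcome, utility of outcome)
        (0.3, 60),       # it rains; you stay dry but carry an umbrella
        (0.7, 80),       # it stays dry; you carried it for nothing
    ],
    "leave_umbrella": [
        (0.3, 0),        # it rains; you get soaked
        (0.7, 100),      # it stays dry; hands free
    ],
}

def expected_utility(outcomes):
    # Sum of each outcome's utility weighted by its probability.
    return sum(p * u for p, u in outcomes)

# Choose the action with the maximum expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
for a, outcomes in actions.items():
    print(a, expected_utility(outcomes))   # 74 vs. 70
print("chosen:", best)                     # take_umbrella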

Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be. Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. There are several kinds of machine learning.
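Information value theory can be sketched the same way: compare the best expected utility achievable without an observation against the expectation of acting optimally after a perfect observation. The gap, often called the expected value of perfect information, is the most the observation could be worth. The sketch below reuses the invented umbrella numbers from the previous example.

```python
# A minimal sketch of the expected value of perfect information (EVPI);
# all numbers are invented, reusing the umbrella example above.

p_rain = 0.3
utility = {  # utility[action][weather]
    "take_umbrella":  {"rain": 60, "dry": 80},
    "leave_umbrella": {"rain": 0,  "dry": 100},
}

def eu(action):
    return p_rain * utility[action]["rain"] + (1 - p_rain) * utility[action]["dry"]

# Acting without information: pick the single best action under uncertainty.
eu_no_info = max(eu(a) for a in utility)   # 74

# With a perfect forecast, pick the best action separately for each weather,
# weighted by how likely each forecast is.
eu_with_info = (p_rain * max(utility[a]["rain"] for a in utility)
                + (1 - p_rain) * max(utility[a]["dry"] for a in utility))  # 88

print("EVPI:", eu_with_info - eu_no_info)  # 14: what a perfect forecast is worth
```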

Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).
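Here is a minimal sketch of the two supervised varieties, assuming scikit-learn is installed; the toy data points are invented for illustration.

```python
# A minimal sketch of classification vs. regression with scikit-learn.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: learn to predict a category (0 or 1) from a numeric feature.
X_cls = [[1.0], [2.0], [8.0], [9.0]]
y_cls = [0, 0, 1, 1]
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[7.5]]))   # -> [1]

# Regression: deduce a numeric function from numeric inputs.
X_reg = [[1.0], [2.0], [3.0], [4.0]]
y_reg = [2.1, 3.9, 6.2, 8.1]  # roughly y = 2x
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5.0]]))   # close to 10
```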

In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as “good”. Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.
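Here is a minimal sketch of reinforcement learning using tabular Q-learning on an invented one-dimensional corridor task; the state space, reward, and hyperparameters are all made up for illustration.

```python
# A minimal tabular Q-learning sketch: states 0..4 in a corridor,
# reward only for reaching state 4. Everything here is invented.
import random

n_states, actions = 5, [-1, +1]        # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0  # "good response" is rewarded
        # Move the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should prefer moving right (+1) in every state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```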

Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously, which can make them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitate human–computer interaction.

However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.
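As a toy illustration of the textual variety, here is a minimal lexicon-based sentiment scorer; the tiny word lists are invented, and real systems use large curated lexicons or learned models rather than anything this simple.

```python
# A minimal sketch of lexicon-based textual sentiment analysis.
# The word lists are invented and far smaller than production lexicons.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    # Count positive hits minus negative hits, ignoring trailing punctuation.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))               # positive
print(sentiment("This was a terrible, awful experience"))   # negative
```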

AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search. State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.
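As a concrete illustration of state space search, here is a minimal breadth-first search over an invented graph of states; real planners search far larger spaces that are defined implicitly rather than listed out.

```python
# A minimal sketch of state-space search: breadth-first search over a toy
# graph of states. The states and transitions are invented for illustration.
from collections import deque

transitions = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["a"],
    "goal": [],
}

def bfs(start, goal):
    # Explore states level by level, remembering the path to each one.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in transitions[state]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path to a goal state

print(bfs("start", "goal"))  # ['start', 'a', 'goal']
```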

Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. “Heuristics” or “rules of thumb” can help prioritize choices that are more likely to reach a goal. Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.
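And a minimal sketch of adversarial search: plain minimax over a tiny hand-built game tree with invented leaf values. Real game programs add depth limits, evaluation functions, and alpha–beta pruning on top of this core loop.

```python
# A minimal sketch of adversarial search: minimax over a tiny game tree.
# The tree structure and its leaf values are invented for illustration.
game_tree = {
    "root": ["L", "R"],   # the maximizing player moves first
    "L": ["LL", "LR"],
    "R": ["RL", "RR"],
}
leaf_value = {"LL": 3, "LR": 5, "RL": 2, "RR": 9}

def minimax(node, maximizing):
    if node in leaf_value:  # terminal position: return its value
        return leaf_value[node]
    values = [minimax(child, not maximizing) for child in game_tree[node]]
    return max(values) if maximizing else min(values)

# The maximizer picks the move with the best guaranteed outcome.
best_move = max(game_tree["root"], key=lambda m: minimax(m, False))
print(best_move, minimax("root", True))  # 'L', value 3
```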


