Tuesday, June 18, 2024

Should I Hire An AI Consultant?



As generative AI rose in prominence this year, Jayesh Gadewar had a thought: We can use this to leapfrog our competitors. He’s the cofounder of Scrut Automation, a startup that builds compliance and security software for businesses, but when it came to AI, his team just didn’t have the expertise. So he did what many other leaders are now doing: He hired an AI consultant.

“Generative AI is funny in that the basic capabilities you get for your $20 a month for ChatGPT Plus are pretty impressive,” says Tom Davenport, professor of information technology and management at Babson College and a senior advisor at Deloitte’s AI practice, “but to customize it is technically challenging.”

That’s why, since OpenAI’s chart-topping chatbot landed a year ago, a new breed of specialist has risen to help companies harness AI’s game-changing wizardry… Continue reading

BY LIZ BRODY

Source: Should I Hire an AI Consultant? | Entrepreneur


Critics:

Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. 
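To make this concrete, here is a minimal unsupervised-learning sketch: k-means clustering groups unlabeled two-dimensional points by similarity alone. The use of scikit-learn, the synthetic data and the choice of three clusters are assumptions made purely for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic, unlabeled data: three blobs of 2-D points (an illustrative assumption).
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(50, 2)),
])

# The model is told only how many clusters to look for, never which point
# belongs where -- that is what makes this unsupervised.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(model.cluster_centers_)   # learned group centers
print(model.labels_[:10])       # cluster assignments for the first 10 points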

Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).
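A short sketch can make the classification/regression split concrete; the tiny hand-labeled datasets and the scikit-learn models below are invented for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: inputs paired with human-provided category labels.
X_cls = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y_cls = np.array([0, 0, 0, 1, 1, 1])          # labels a person supplied in advance
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[2.5], [10.5]]))           # -> [0 1]

# Regression: inputs paired with numeric targets; the model deduces a
# numeric function (here roughly y = 2x + 1).
X_reg = np.array([[1.0], [2.0], [3.0], [4.0]])
y_reg = np.array([3.0, 5.0, 7.0, 9.0])
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[5.0]]))                   # -> approximately [11.]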

In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as “good”. Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning. 
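The reward-and-punishment loop can be sketched with tabular Q-learning on a made-up five-state corridor; the environment, reward values and hyperparameters below are assumptions chosen only to keep the example small.

import numpy as np

# Invented environment: a corridor of 5 states. The agent starts at state 0,
# gets +1 for reaching state 4, and a small penalty (-0.01) for every step.
n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))   # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.1

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != 4:
        # Mostly exploit the best-known action, occasionally explore at random.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[s]))
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else -0.01   # reward for "good" moves, punishment otherwise
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action in the next state.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))   # learned policy: non-terminal states prefer "right" (1)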

Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.

Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.
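As a concrete, hedged example of one of these NLP tasks, the sketch below does simple information retrieval by ranking a toy document collection against a query with TF-IDF vectors and cosine similarity; the documents, the query and the scikit-learn approach are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A toy document collection and query, invented for illustration.
docs = [
    "the cat sat on the mat",
    "dogs are loyal pets",
    "compliance software automates security audits",
]
query = ["security and compliance automation"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)     # one TF-IDF vector per document
query_vector = vectorizer.transform(query)

# Rank the documents by cosine similarity to the query.
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(scores.argmax(), scores)   # the compliance/security document ranks first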

Early work, based on Noam Chomsky’s generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called “micro-worlds” (due to the common sense knowledge problem). Modern deep learning techniques for NLP include word embedding (representing a word as a vector based on how often it appears near other words), transformers (which find patterns in text), and others.
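To illustrate the idea behind word embeddings, here is a bare-bones count-based version: build a co-occurrence matrix and compress it with a truncated SVD so that words used in similar contexts get similar vectors. The tiny corpus, the one-word window and the two-dimensional vectors are assumptions for illustration; real embeddings are learned from billions of words.

import numpy as np

corpus = "the king rules the land the queen rules the land".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Count how often each word appears within one position of another word.
counts = np.zeros((len(vocab), len(vocab)))
for i, word in enumerate(corpus):
    for j in range(max(0, i - 1), min(len(corpus), i + 2)):
        if j != i:
            counts[index[word], index[corpus[j]]] += 1

# A truncated SVD of the co-occurrence matrix yields low-dimensional vectors
# in which words that appear in similar contexts end up close together.
U, S, _ = np.linalg.svd(counts)
embeddings = U[:, :2] * S[:2]

king, queen = embeddings[index["king"]], embeddings[index["queen"]]
cos = king @ queen / (np.linalg.norm(king) * np.linalg.norm(queen) + 1e-9)
print(f"cosine(king, queen) = {cos:.2f}")   # similar contexts -> high similarity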

In 2019, generative pre-trained transformer (or “GPT”) language models began to generate coherent text, and by 2023 these models were able to achieve human-level scores on the bar exam, the SAT, the GRE, and many other real-world benchmarks. Feature detection (such as edge detection) helps AI compose informative abstract structures out of raw data. Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.
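To show what low-level feature detection looks like in code, here is a small edge-detection sketch: a Sobel-style filter slid over a synthetic image responds only where brightness changes. The image and the explicit nested-loop filtering are simplifications assumed for clarity.

import numpy as np

# Synthetic 8x8 grayscale image: dark on the left, bright on the right.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# Sobel kernel that responds to horizontal changes in brightness.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Slide the 3x3 kernel over the image (cross-correlation, no padding).
h, w = image.shape
edges = np.zeros((h - 2, w - 2))
for r in range(h - 2):
    for c in range(w - 2):
        edges[r, c] = np.sum(image[r:r + 3, c:c + 3] * sobel_x)

# The response is large only where brightness changes: the vertical edge.
print(np.abs(edges).round(1))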

The field includes speech recognition, image classification, facial recognition, object recognition, and robotic perception. Deep learning uses several layers of neurons between the network’s inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human, such as digits, letters or faces.
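The layer-by-layer feature extraction described above can be sketched as a small convolutional network; PyTorch, the layer sizes and the 28x28 “digit-like” input are assumptions for illustration, not a reference architecture.

import torch
from torch import nn

# A small stack of layers: early convolutions respond to edges and simple
# textures, later layers combine them into higher-level features, and the
# final linear layer maps those features to 10 digit classes.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level features (edges)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid-level features (strokes)
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # class scores for digits 0-9
)

# One fake grayscale "digit" image, just to check that the shapes line up.
x = torch.randn(1, 1, 28, 28)
print(model(x).shape)   # -> torch.Size([1, 10])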

Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification and others. The reason that deep learning performs so well in so many applications is not known as of 2023. The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.

AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple’s Face ID or Microsoft’s DeepFace and Google’s FaceNet) and image labeling (used by Facebook, Apple’s iPhoto and TikTok).

There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated “AI” in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.

A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an “intelligence explosion” and Vernor Vinge called a “singularity”.

However, technologies can’t improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.

Edward Fredkin argues that “artificial intelligence is the next stage in evolution”, an idea first proposed in Samuel Butler’s “Darwin among the Machines” as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998.
