Sunday, February 2, 2025

Generative AI In Education: Bridging The Skills Gap 

Amid the AI revolution, a critical gap is emerging that could determine the trajectory of economies and industries worldwide: the global skills gap. As workplaces rapidly adopt artificial intelligence and automation, education systems struggle to keep pace. However, a recent Global Student Survey from Chegg, which spans over 11,700 undergraduate students across 15 countries, sheds new light on what is possible if generative AI is responsibly deployed in academic settings, particularly in higher education… Continue reading…

Source: Due

Critics:

The AI-in-education community has grown rapidly in the Global North. Currently, there is much hype from venture capital, big tech, and committed open educationalists. AI in education is a contested terrain. Some educationalists believe that AI will remove the obstacle of "access to expertise".

Others claim that education will be revolutionised by machines and their ability to understand natural language, while still others are exploring how LLMs' "reasoning" might be improved. In the Global South, some see AI's data processing and monitoring as a misguided attempt to address colonialism and apartheid, one that has inadvertently reinforced a neo-liberal approach to education.

AI companies that focus on education are currently preoccupied with generative artificial intelligence (GAI), although data science and data analytics are another popular educational theme. At present, there is little scientific consensus on what AI is or how to classify and sub-categorise it. This has not hampered the growth of AI in education systems, which gather data and then optimise models.

AI offers scholars and students automatic assessment and feedback, predictions, instant machine translation, on-demand proofreading and copy editing, and intelligent tutoring or virtual assistants. The "generative-AI supply chain" brings conversational coherence to the classroom and automates the production of content.

Using categorisation, summaries, and dialogue, AI's "intelligence" or "authority" is reinforced through anthropomorphism and the Eliza effect. Educational technology can be a powerful and effective assistant in a suitable setting, and computer companies are constantly updating their technology products. Some educationalists have suggested that AI might automate procedural knowledge and expertise, or even match or surpass human capacities on cognitive tasks.

They advocate for the integration of AI across the curriculum and the development of AI literacy. Others are more sceptical, since AI faces an ethical challenge: "fabricated responses" or "inaccurate information", politely referred to as "hallucinations", are generated and presented as fact. Some remain curious about society's tendency to put its faith in engineering achievements, and about the systems of power and privilege that lead towards determinist thinking.

Still others see copyright infringement or the introduction of harm, division, and other social impacts, and advocate resistance to AI. Evidence is mounting that AI-written assessments are undetectable, which poses serious questions about the academic integrity of university assessments. Large language models (LLMs) take text as input and generate text as output. LLMs are built from billions of words and lines of code that have been web-scraped by AI companies or researchers.

LLMs often depend on a huge text corpus that is extracted, sometimes without permission. LLMs are feats of engineering that treat text as tokens. The relationships between the tokens allow an LLM to predict the next word, and then the next, generating a meaningful sentence that has the appearance of thought and interactivity. This massive dataset creates a statistical reasoning machine that performs pattern recognition.
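The next-word prediction described above can be illustrated with a toy sketch. This is not how a real LLM works internally (real models use neural networks over vast corpora, not bigram counts), but it shows the basic idea of learning which token tends to follow which, then generating text one predicted token at a time. All names and the tiny corpus here are illustrative.

```python
from collections import Counter, defaultdict

# A toy corpus; real LLMs are trained on billions of web-scraped tokens.
corpus = "the model predicts the next word and then the next word".split()

# Count bigram frequencies: how often each token follows each other token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token after `token`, or None."""
    followers = bigrams[token]
    return followers.most_common(1)[0][0] if followers else None

def generate(start, length):
    """Greedily extend a sentence one predicted token at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", 4))  # → "the next word and then"
```

The output has the surface appearance of fluent text, yet the program has no understanding of it; scaled up enormously, this is the sense in which an LLM is a statistical pattern-recognition machine.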

The LLM examines the relationships between tokens, generates probable outputs in response to a prompt, and completes a defined task, such as translating, editing, or writing. The output presented is a smoothed collection of words, normalised and predictable. However, the text corpora that LLMs draw on can be problematic, as outputs will reflect the stereotypes or biases of the people or cultures whose content has been digitised.

The confident but incorrect outputs are termed "hallucinations". These plausible errors are not malfunctions but a consequence of the engineering decisions that inform the large language model. "Guardrails" are offered as validators of LLM output, to prevent these errors and safeguard accuracy, yet there are no fixes for so-called "hallucinations": the factually incorrect or nonsensical information that seems plausible. Translation, summarisation, information retrieval, and conversational interaction are some of the complex language tasks that machines are expected to handle.
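One common style of "guardrail" is a post-hoc grounding check: before showing an answer, verify that its claims can be traced back to a trusted source text. The sketch below is a deliberately crude, assumed implementation (substring matching on content words) to make the idea concrete; real guardrail systems are far more sophisticated, and even they cannot fully eliminate hallucinations.

```python
def is_grounded(answer_sentence, source_text):
    """Crude grounding check: every content word in the answer
    must appear somewhere in the trusted source text."""
    stopwords = {"the", "a", "an", "is", "are", "of", "and", "in", "to"}
    words = [w.strip(".,").lower() for w in answer_sentence.split()]
    content = [w for w in words if w and w not in stopwords]
    return all(w in source_text.lower() for w in content)

# Source text taken from the survey description earlier in this article.
source = "The Chegg survey spans 11,700 undergraduate students across 15 countries."

print(is_grounded("The survey spans 15 countries.", source))  # → True
print(is_grounded("The survey spans 50 countries.", source))  # → False
```

A check like this can flag an obviously unsupported figure, but a fluent hallucination that happens to reuse source vocabulary would slip straight through, which is why guardrails mitigate rather than fix the problem.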

