Between the pandemic and the rise of generative AI, the education sector has been in a permanent state of flux over the past few years. For a time, online learning platforms were ascendant, meeting the moment when workplaces and schools alike went remote (and later, hybrid). With the public debut of ChatGPT in 2022, edtech companies—such as edX, which was one of the first online learning giants to launch a ChatGPT plugin…
By: Pavithra Mohan
Source: FastCompany
Critics:
The AI-in-education community has grown rapidly in the Global North. Currently, there is much hype from venture capital, big tech, and convinced open educationalists. AI in education is a contested terrain. Some educationalists believe that AI will remove the obstacle of “access to expertise”. Others claim that education will be revolutionised by machines and their ability to understand natural language.
Still others are exploring how LLM “reasoning” might be improved. In the Global South, some see AI's data processing and monitoring as a misguided attempt to address colonialism and apartheid that has inadvertently reinforced a neo-liberal approach to education. AI companies that focus on education are currently preoccupied with generative artificial intelligence (GAI), although data science and data analytics are another popular educational theme.
At present, there is little scientific consensus on what AI is or how to classify and sub-categorize it. This has not hampered the growth of AI in education systems, which are gathering data and then optimising models. AI offers scholars and students automatic assessment and feedback, predictions, instant machine translation, on-demand proofreading and copy editing, and intelligent tutoring or virtual assistants. The “generative-AI supply chain” brings conversational coherence to the classroom and automates the production of content.
Through categorisation, summaries, and dialogue, AI “intelligence” or “authority” is reinforced by anthropomorphism and the Eliza effect. Educational technology can be a powerful and effective assistant in a suitable setting, and computer companies are constantly updating their products. Some educationalists have suggested that AI might automate procedural knowledge and expertise, or even match or surpass human capacities on cognitive tasks. They advocate for the integration of AI across the curriculum and the development of AI literacy.
Others are more skeptical, as AI faces an ethical challenge: “fabricated responses” or “inaccurate information”, politely referred to as “hallucinations”, are generated and presented as fact. Some remain curious about society's tendency to put its faith in engineering achievements, and about the systems of power and privilege that lead towards determinist thinking. Still others see copyright infringement or the introduction of harm, division, and other social impacts, and advocate resistance to AI.
Evidence is mounting that AI-written assessments are undetectable, which poses serious questions about the academic integrity of university assessments. Large language models (LLMs) take text as input data and generate output text. LLMs are built from billions of words and lines of code that have been web-scraped by AI companies or researchers; they often depend on a huge text corpus that is extracted, sometimes without permission. LLMs are feats of engineering that see text as tokens.
The relationships between the tokens allow an LLM to predict the next word, and then the next, generating a meaningful sentence that has an appearance of thought and interactivity. This massive dataset creates a statistical reasoning machine that does pattern recognition: the LLM examines the relationships between tokens, generates probable outputs in response to a prompt, and completes a defined task, such as translating, editing, or writing.
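The next-word prediction described above can be illustrated with a toy bigram model. This is a drastic simplification of a real transformer-based LLM (which learns relationships across long contexts, not just adjacent words), and the corpus and function names here are purely illustrative; the sketch only shows the core idea of generation as repeated next-token prediction over learned token statistics.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of web-scraped words
# an actual LLM is trained on.
corpus = (
    "the model predicts the next word and then the next word "
    "the model sees text as tokens"
).split()

# Count token -> next-token transitions (the "relationships
# between tokens" in miniature).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent successor of `token` in the corpus."""
    return transitions[token].most_common(1)[0][0]

def generate(start: str, length: int) -> list[str]:
    """Greedily extend a sequence one predicted token at a time."""
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return out

print(generate("the", 4))
```

The output is fluent-looking but purely statistical: the model has no notion of meaning, only of which token tends to follow which.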
The output presented is a smoothed collection of words, normalized and predictable. However, the text corpora that LLMs draw on can be problematic, as outputs will reflect the stereotypes or biases of the people or cultures whose content has been digitized. Confident but incorrect outputs are termed “hallucinations”. These plausible errors are not malfunctions but a consequence of the engineering decisions that inform the large language model.
“Guardrails” are offered as validators of LLM output, promising to prevent these errors and safeguard accuracy. Yet there are no fixes for so-called “hallucinations”: the factually incorrect or nonsensical information that seems plausible. Translation, summarization, information retrieval, and conversational interaction are some of the complex language tasks that machines are now expected to handle.
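The idea of a guardrail as a post-hoc output validator can be sketched as below. The `validate_output` function and its heuristics are hypothetical, not any real framework's API; real guardrail systems are far more elaborate, and, as noted above, none can fully prevent hallucinations.

```python
import re

def validate_output(text: str) -> list[str]:
    """Flag simple problems in model output; return a list of warnings."""
    warnings = []
    if not text.strip():
        warnings.append("empty output")
    # Naive heuristic: a percentage claim with no attribution phrase
    # is flagged for human review rather than presented as fact.
    if re.search(r"\d+%", text) and "according to" not in text.lower():
        warnings.append("unattributed statistic")
    return warnings
```

A validator like this can only catch surface patterns it was written to look for; a fluent, well-attributed fabrication passes through untouched, which is why guardrails mitigate rather than solve the problem.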