
Biz Builder Mike

You can't sail Today's boat on Yesterday's wind - Michael Noel


MIT AI

Dec 22 2022

Cognitive scientists develop new model explaining difficulty in language comprehension

Cognitive scientists have long sought to understand what makes some sentences more difficult to comprehend than others. Researchers believe that any full account of language comprehension must explain these difficulties.

In recent years, researchers developed two models explaining two significant types of difficulty in understanding and producing sentences. While these models successfully predict specific patterns of comprehension difficulty, their predictions are limited and don’t fully match the results of behavioral experiments. Moreover, until recently, researchers couldn’t integrate the two models into a coherent account.

A new study led by researchers from MIT’s Department of Brain and Cognitive Sciences (BCS) now provides such a unified account for difficulties in language comprehension. Building on recent advances in machine learning, the researchers developed a model that better predicts the ease, or lack thereof, with which individuals produce and comprehend sentences. They recently published their findings in the Proceedings of the National Academy of Sciences.

The senior authors of the paper are BCS professors Roger Levy and Edward (Ted) Gibson. The lead author is Levy and Gibson’s former visiting student, Michael Hahn, now a professor at Saarland University. The second author is Richard Futrell, another former student of Levy and Gibson who is now a professor at the University of California at Irvine.

“This is not only a scaled-up version of the existing accounts for comprehension difficulties,” says Gibson. “We offer a new underlying theoretical approach that allows for better predictions.”

The researchers built on the two existing models to create a unified theoretical account of comprehension difficulty. Each of these older models identifies a distinct culprit for frustrated comprehension: difficulty in expectation and difficulty in memory retrieval. We experience difficulty in expectation when a sentence doesn’t easily allow us to anticipate its upcoming words. We experience difficulty in memory retrieval when we have a hard time tracking a sentence featuring a complex structure of embedded clauses, such as: “The fact that the doctor who the lawyer distrusted annoyed the patient was surprising.”

In 2020, Futrell first devised a theory unifying these two models. He argued that limits in memory don’t affect only retrieval in sentences with embedded clauses but plague all language comprehension; our memory limitations don’t allow us to perfectly represent sentence contexts during language comprehension more generally.

Thus, according to this unified model, memory constraints can create a new source of difficulty in anticipation. We can have difficulty anticipating an upcoming word in a sentence even if the word should be easily predictable from context, in cases where the sentence context itself is difficult to hold in memory. Consider, for example, a sentence beginning with the words “Bob threw the trash…”: we can easily anticipate the final word, “out.” But if the sentence context preceding the final word is more complex, difficulties in expectation arise: “Bob threw the old trash that had been sitting in the kitchen for several days [out].”
 
Researchers quantify comprehension difficulty by measuring the time it takes readers to respond to different comprehension tasks. The longer the response time, the more challenging the comprehension of a given sentence. Results from prior experiments showed that Futrell’s unified account predicted readers’ comprehension difficulties better than the two older models. But his model didn’t identify which parts of the sentence we tend to forget — and how exactly this failure in memory retrieval obfuscates comprehension.

Hahn’s new study fills in these gaps. In the new paper, the cognitive scientists from MIT joined Futrell to propose an augmented model grounded in a new coherent theoretical framework. The new model identifies and corrects missing elements in Futrell’s unified account and provides new fine-tuned predictions that better match results from empirical experiments.

As in Futrell’s original model, the researchers begin with the idea that our mind, due to memory limitations, doesn’t perfectly represent the sentences we encounter. But to this they add the theoretical principle of cognitive efficiency. They propose that the mind tends to deploy its limited memory resources in a way that optimizes its ability to accurately predict new word inputs in sentences.

This notion leads to several empirical predictions. According to one key prediction, readers compensate for their imperfect memory representations by relying on their knowledge of the statistical co-occurrences of words in order to implicitly reconstruct the sentences they read in their minds. Sentences that include rarer words and phrases are therefore harder to remember perfectly, making it harder to anticipate upcoming words. As a result, such sentences are generally more challenging to comprehend.

To evaluate whether this prediction matches our linguistic behavior, the researchers utilized GPT-2, an AI natural language tool based on neural network modeling. This machine learning tool, first made public in 2019, allowed the researchers to test the model on large-scale text data in a way that wasn’t possible before. But GPT-2’s powerful language modeling capacity also created a problem: In contrast to humans, GPT-2’s immaculate memory perfectly represents all the words in even very long and complex texts that it processes. To more accurately characterize human language comprehension, the researchers added a component that simulates human-like limitations on memory resources — as in Futrell’s original model — and used machine learning techniques to optimize how those resources are used — as in their new proposed model. The resulting model preserves GPT-2’s ability to accurately predict words most of the time, but shows human-like breakdowns in cases of sentences with rare combinations of words and phrases.
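The mechanics of such a "lossy-context" model can be sketched in miniature. The toy below is not the researchers' GPT-2-based model: the probability tables, the word-forgetting rule, and the backoff scheme are all invented for illustration. It only shows the core idea that averaging next-word surprisal over imperfect memories of the context raises the predicted difficulty.

```python
import math
import random

random.seed(0)

# Toy "lossy-context surprisal": predict the next word not from the true
# context but from a noisy memory of it. Every distribution and number
# below is invented; the actual study used GPT-2 with a learned,
# optimized memory component.
P_NEXT = {
    ("threw", "the", "trash"): {"out": 0.90, "away": 0.10},
    ("threw", "the"):          {"ball": 0.50, "trash": 0.30, "away": 0.10, "out": 0.10},
    ("the", "trash"):          {"out": 0.70, "can": 0.20, "away": 0.10},
    ():                        {"the": 0.30, "out": 0.05, "a": 0.65},
}

def remembered(context, keep_prob):
    """Lossy memory: each context word survives independently with keep_prob."""
    return tuple(w for w in context if random.random() < keep_prob)

def backoff(context):
    """Drop the earliest remembered word until we reach a known context."""
    while context not in P_NEXT:
        context = context[1:]
    return context

def lossy_surprisal(context, word, keep_prob, samples=2000):
    """Average surprisal (in bits) of `word` under noisy memory of `context`."""
    total = 0.0
    for _ in range(samples):
        ctx = backoff(remembered(context, keep_prob))
        total += -math.log2(P_NEXT[ctx].get(word, 1e-6))
    return total / samples

context = ("threw", "the", "trash")
perfect = lossy_surprisal(context, "out", keep_prob=1.0)  # memory intact
lossy = lossy_surprisal(context, "out", keep_prob=0.5)    # half the words forgotten
print(f"perfect memory: {perfect:.2f} bits, lossy memory: {lossy:.2f} bits")
```

With perfect memory, "out" is cheap to predict; once context words can be forgotten, the same word becomes more surprising on average, which is the pattern the unified account ties to comprehension difficulty.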

“This is a wonderful illustration of how modern tools of machine learning can help develop cognitive theory and our understanding of how the mind works,” says Gibson. “We couldn’t have conducted this research here even a few years ago.”

The researchers fed the machine learning model a set of sentences with complex embedded clauses such as, “The report that the doctor who the lawyer distrusted annoyed the patient was surprising.” The researchers then took these sentences and replaced their opening nouns — “report” in the example above — with other nouns, each with their own probability to occur with a following clause or not. Some nouns made the sentences into which they were slotted easier for the AI program to “comprehend.” For instance, the model was able to more accurately predict how these sentences end when they began with the common phrasing “The fact that” than when they began with the rarer phrasing “The report that.”

The researchers then set out to corroborate the AI-based results by conducting experiments with participants who read similar sentences. Their response times on the comprehension tasks closely matched the model’s predictions. “When the sentences begin with the words ’report that,’ people tended to remember the sentence in a distorted way,” says Gibson. The rare phrasing further constrained their memory and, as a result, constrained their comprehension.

These results demonstrate that the new model outperforms existing models in predicting how humans process language.

Another advantage the model demonstrates is its ability to offer varying predictions from language to language. “Prior models could explain why certain language structures, like sentences with embedded clauses, may be generally harder to work with within the constraints of memory, but our new model can explain why the same constraints behave differently in different languages,” says Levy. “Sentences with center-embedded clauses, for instance, seem to be easier for native German speakers than native English speakers, since German speakers are used to reading sentences where subordinate clauses push the verb to the end of the sentence.”

According to Levy, further research on the model is needed to identify causes of inaccurate sentence representation other than embedded clauses. “There are other kinds of ‘confusions’ that we need to test.” Simultaneously, Hahn adds, “the model may predict other ‘confusions’ which nobody has even thought about. We’re now trying to find those and see whether they affect human comprehension as predicted.”

Another question for future studies is whether the new model will lead to a rethinking of a long line of research focusing on the difficulties of sentence integration: “Many researchers have emphasized difficulties relating to the process in which we reconstruct language structures in our minds,” says Levy. “The new model possibly shows that the difficulty relates not to the process of mental reconstruction of these sentences, but to maintaining the mental representation once they are already constructed. A big question is whether or not these are two separate things.”

One way or another, adds Gibson, “this kind of work marks the future of research on these questions.”

DON’T MISS A BEAT

Top Stories from around the world, delivered straight to your inbox. Once Weekly.

We don’t spam! Read our privacy policy https://bizbuildermike.com/anti-spam-policy/ for more info.


Cognitive scientists develop new model explaining difficulty in language comprehension Republished from Source https://news.mit.edu/2022/cognitive-scientists-develop-new-model-explaining-difficulty-language-comprehension-1222 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Department of Brain and Cognitive Sciences · Categorized: AI, MIT AI · Tagged: AI, MIT AI

Dec 16 2022

Subtle biases in AI can influence emergency decisions

It’s no secret that people harbor biases — some unconscious, perhaps, and others painfully overt. The average person might suppose that computers — machines typically made of plastic, steel, glass, silicon, and various metals — are free of prejudice. While that assumption may hold for computer hardware, the same is not always true for computer software, which is programmed by fallible humans and can be fed data that is, itself, compromised in certain respects.

Artificial intelligence (AI) systems — those based on machine learning, in particular — are seeing increased use in medicine for diagnosing specific diseases, for example, or evaluating X-rays. These systems are also being relied on to support decision-making in other areas of health care. Recent research has shown, however, that machine learning models can encode biases against minority subgroups, and the recommendations they make may consequently reflect those same biases.

A new study by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, which was published last month in Communications Medicine, assesses the impact that discriminatory AI models can have, especially for systems that are intended to provide advice in urgent situations. “We found that the manner in which the advice is framed can have significant repercussions,” explains the paper’s lead author, Hammaad Adam, a PhD student at MIT’s Institute for Data, Systems, and Society. “Fortunately, the harm caused by biased models can be limited (though not necessarily eliminated) when the advice is presented in a different way.” The other co-authors of the paper are Aparna Balagopalan and Emily Alsentzer, both PhD students, and the professors Fotini Christia and Marzyeh Ghassemi.

AI models used in medicine can suffer from inaccuracies and inconsistencies, in part because the data used to train the models are often not representative of real-world settings. Different kinds of X-ray machines, for instance, can record things differently and hence yield different results. Models trained predominantly on white people, moreover, may not be as accurate when applied to other groups. The Communications Medicine paper is not focused on issues of that sort but instead addresses problems that stem from biases and on ways to mitigate the adverse consequences.

A group of 954 people (438 clinicians and 516 nonexperts) took part in an experiment to see how AI biases can affect decision-making. The participants were presented with call summaries from a fictitious crisis hotline, each involving a male individual undergoing a mental health emergency. The summaries contained information as to whether the individual was Caucasian or African American and would also mention his religion if he happened to be Muslim. A typical call summary might describe a circumstance in which an African American man was found at home in a delirious state, indicating that “he has not consumed any drugs or alcohol, as he is a practicing Muslim.” Study participants were instructed to call the police if they thought the patient was likely to turn violent; otherwise, they were encouraged to seek medical help.

The participants were randomly divided into a control or “baseline” group plus four other groups designed to test responses under slightly different conditions. “We want to understand how biased models can influence decisions, but we first need to understand how human biases can affect the decision-making process,” Adam notes. What they found in their analysis of the baseline group was rather surprising: “In the setting we considered, human participants did not exhibit any biases. That doesn’t mean that humans are not biased, but the way we conveyed information about a person’s race and religion, evidently, was not strong enough to elicit their biases.”

The other four groups in the experiment were given advice that either came from a biased or unbiased model, and that advice was presented in either a “prescriptive” or a “descriptive” form. A biased model would be more likely to recommend police help in a situation involving an African American or Muslim person than would an unbiased model. Participants in the study, however, did not know which kind of model their advice came from, or even that models delivering the advice could be biased at all. Prescriptive advice spells out what a participant should do in unambiguous terms, telling them they should call the police in one instance or seek medical help in another. Descriptive advice is less direct: A flag is displayed to show that the AI system perceives a risk of violence associated with a particular call; no flag is shown if the threat of violence is deemed small.  

A key takeaway of the experiment is that participants “were highly influenced by prescriptive recommendations from a biased AI system,” the authors wrote. But they also found that “using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making.” In other words, the bias incorporated within an AI model can be diminished by appropriately framing the advice that’s rendered. Why the different outcomes, depending on how advice is posed? When someone is told to do something, like call the police, that leaves little room for doubt, Adam explains. However, when the situation is merely described — classified with or without the presence of a flag — “that leaves room for a participant’s own interpretation; it allows them to be more flexible and consider the situation for themselves.”

Second, the researchers found that the language models that are typically used to offer advice are easy to bias. Language models represent a class of machine learning systems that are trained on text, such as the entire contents of Wikipedia and other web material. When these models are “fine-tuned” by relying on a much smaller subset of data for training purposes — just 2,000 sentences, as opposed to 8 million web pages — the resultant models can be readily biased.  

Third, the MIT team discovered that decision-makers who are themselves unbiased can still be misled by the recommendations provided by biased models. Medical training (or the lack thereof) did not change responses in a discernible way. “Clinicians were influenced by biased models as much as non-experts were,” the authors stated.

“These findings could be applicable to other settings,” Adam says, and are not necessarily restricted to health care situations. When it comes to deciding which people should receive a job interview, a biased model could be more likely to turn down Black applicants. The results could be different, however, if instead of explicitly (and prescriptively) telling an employer to “reject this applicant,” a descriptive flag is attached to the file to indicate the applicant’s “possible lack of experience.”

The implications of this work are broader than just figuring out how to deal with individuals in the midst of mental health crises, Adam maintains.  “Our ultimate goal is to make sure that machine learning models are used in a fair, safe, and robust way.”


Subtle biases in AI can influence emergency decisions Republished from Source https://news.mit.edu/2022/when-subtle-biases-ai-influence-emergency-decisions-1216 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Steve Nadis MIT CSAIL · Categorized: AI, MIT AI · Tagged: AI, MIT AI

Dec 14 2022

Machine learning and the arts: A creative continuum

Sketch a doodle of a drum or a saxophone to conjure a multi-instrumental composition. Look into a webcam, speak, and watch your mouth go bouncing across the screen — the input for a series of charmingly clunky chain reactions.

This is what visitors to the MIT Lewis Music Library encounter when they interact with two new digital installations, “Doodle Tunes” and “Sounds from the Mouth,” created by 2022-23 Center for Art, Science and Technology (CAST) Visiting Artist Andreas Refsgaard in collaboration with Music Technology and Digital Media Librarian Caleb Hall. The residency was initiated by Avery Boddie, Lewis Music Library department head, who recognized Refsgaard’s flair for revealing the playfulness of emerging technologies. The intricacies of coding and machine learning can seem daunting to newcomers, but Refsgaard’s practice as a creative coder, interaction designer, and educator seeks to open the field to all. Encompassing workshops, an artist talk, class visits, and an exhibition, the residency was infused with his unique sense of humor — a combination of lively eccentricity and easygoing relatability.


Machine Learning and the Arts with MIT CAST Visiting Artist Andreas Refsgaard

Learning through laughter

Refsgaard, who is based in Copenhagen, is a true maverick of machine learning. “I’m interested in the ways we can express ourselves through code,” he explains. “I like to make unconventional connections between inputs and outputs, with the computer serving as a translator — a tool might allow you to play music with your eyes, or it might generate a love poem from a photo of a burrito.” Refsgaard’s particular spin on innovation isn’t about directly solving problems or launching world-changing startups. Instead, he simply seeks to “poke at what can be done,” providing accessible open-source templates to prompt new creative ideas and applications.

Programmed by Refsgaard and featuring a custom set of sounds created by Hall, “Doodle Tunes” and “Sounds from the Mouth” demonstrate how original compositions can be generated through a mix of spontaneous human gestures and algorithmically produced outputs. In “Doodle Tunes,” a machine learning algorithm is trained on a dataset of drawings of different instruments: a piano, drums, bass guitar, or saxophone. When the user sketches one of these images on a touchscreen, a sound is generated; the more instruments you add, the more complex the composition. “Sounds from the Mouth” works through facial tracking and self-capturing images. When the participant faces a webcam and opens their mouth, an autonomous snapshot is created which bounces off the notes of a piano. To try the projects for yourself, scroll to the end of this article.
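The interaction logic behind an installation like “Doodle Tunes” can be sketched in a few lines. The classifier outputs, loop file names, and confidence threshold below are all hypothetical stand-ins (the actual installation trains a model on a dataset of instrument drawings); the sketch shows only the mapping from recognized labels to layered loops.

```python
# Hypothetical sketch of "Doodle Tunes"-style logic: each classified sketch
# that names a known instrument adds that instrument's loop to the running
# mix. Labels, file names, and the threshold are invented for illustration.
INSTRUMENT_LOOPS = {
    "piano": "piano_loop.wav",
    "drums": "drums_loop.wav",
    "bass": "bass_loop.wav",
    "saxophone": "sax_loop.wav",
}

def update_mix(mix, label, confidence, threshold=0.6):
    """Add a loop for a confidently recognized instrument; ignore the rest."""
    if confidence >= threshold and label in INSTRUMENT_LOOPS:
        mix.add(INSTRUMENT_LOOPS[label])
    return mix

# Simulated classifier outputs for four user doodles. Note the misfire:
# a "squirrel" label has no loop, so it is silently ignored, and the
# low-confidence saxophone guess is filtered out.
mix = set()
for label, confidence in [("piano", 0.92), ("squirrel", 0.71),
                          ("saxophone", 0.41), ("drums", 0.88)]:
    update_mix(mix, label, confidence)

print(sorted(mix))  # the layered composition so far
```

The more instruments the classifier confidently recognizes, the more loops join the mix, which is the "more instruments you add, the more complex the composition" behavior described above.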

Libraries, unlimited

Saxophone squeals and digital drum beats aren’t the only sounds issuing from the areas where the projects are installed. “My office is close by,” says Hall. “So when I suddenly hear laughter, I know exactly what’s up.” This new sonic dimension of the Lewis Music Library fits with the ethos of the environment as a whole — designed as a campus hub for audio experimentation, the library was never intended to be wholly silent. Refsgaard’s residency exemplifies a new emphasis on progressive programming spearheaded by Boddie, as the strategy of the library shifts toward a focus on digital collections and music technology.

“In addition to serving as a space for quiet study and access to physical resources, we want the library to be a place where users congregate, collaborate, and explore together,” says Boddie. “This residency was very successful in that regard. Through the workshops, we were able to connect individuals from across the MIT community and their unique disciplines. We had people from the Sloan School of Management, from the Schwarzman College of Computing, from Music and Theater Arts, all working together, getting messy, creating tools that sometimes worked … and sometimes didn’t.”

Error and serendipity

The integration of error is a key quality of Refsgaard’s work. Occasional glitches are part of the artistry, and they also serve to gently undermine the hype around AI; an algorithm is only as good as its dataset, and that set is inflected by human biases and oversights. During a public artist talk, “Machine Learning and the Arts,” audience members were initiated into Refsgaard’s offbeat artistic paradigm, presented with projects such as Booksby.ai (an online bookstore for AI-produced sci-fi novels), Is it FUNKY? (an attempt to distinguish between “fun” and “boring” images), and Eye Conductor (an interface to play music via eye movements and facial gestures). Glitches in the exhibit installations were frankly admitted (it’s true that “Doodle Tunes” occasionally mistakes a drawing of a saxophone for a squirrel), and Refsgaard encouraged audience members to suggest potential improvements.

This open-minded attitude set the tone of the workshops “Art, Algorithms and Artificial Intelligence” and “Machine Learning for Interaction Designers,” intended to be suitable for newcomers as well as curious experts. Refsgaard’s visits to music technology classes explored the ways that human creativity could be amplified by machine learning, and how to navigate the sliding scale between artistic intention and unexpected outcomes. “As I see it, success is when participants engage with the material and come up with new ideas. The first step of learning is to understand what is being taught — the next is to apply that understanding in ways that the teacher couldn’t have foreseen.”

Uncertainty and opportunity

Refsgaard’s work exemplifies some of the core values and questions central to the evolution of MIT Libraries — issues of digitization, computation, and open access. By choosing to make his lighthearted demos freely accessible, he renounces ownership of his ideas; a machine learning model might serve as a learning device for a student, and it might equally be monetized by a corporation. For Refsgaard, play is a way of engaging with the ethical implications of emerging technologies, and Hall found himself grappling with these questions in the process of creating the sounds for the two installations. “If I wrote the sound samples, but someone else arranged them as a composition, then who owns the music? Or does the AI own the music? It’s an incredibly interesting time to be working in music technology; we’re entering into unknown territory.”

For Refsgaard, uncertainty is the secret sauce of his algorithmic artistry. “I like to make things where I’m surprised by the end result,” he says. “I’m seeking that sweet spot between something familiar and something unexpected.” As he explains, too much surprise simply amounts to noise, but there’s something joyful in the possibility that a machine might mistake a saxophone for a squirrel. The task of a creative coder is to continually tune the relationship between human and machine capabilities — to find and follow the music.

“Doodle Tunes” and “Sounds from the Mouth” are on display in the MIT Lewis Music Library (14E-109) until Dec. 20. Click the links to interact with the projects online.


Machine learning and the arts: A creative continuum Republished from Source https://news.mit.edu/2022/machine-learning-and-arts-creative-continuum-1214 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Matilda Bathurst Arts at MIT · Categorized: AI, MIT AI · Tagged: AI, MIT AI

Dec 09 2022

Meet the 2022-23 Accenture Fellows

Launched in October 2020, the MIT and Accenture Convergence Initiative for Industry and Technology underscores the ways in which industry and technology can collaborate to spur innovation. The five-year initiative aims to achieve its mission through research, education, and fellowships. To that end, Accenture has once again awarded five annual fellowships to MIT graduate students working on research in industry and technology convergence who are underrepresented, including by race, ethnicity, and gender.

This year’s Accenture Fellows work across research areas including telemonitoring, human-computer interactions, operations research, AI-mediated socialization, and chemical transformations. Their research covers a wide array of projects, including designing low-power processing hardware for telehealth applications; applying machine learning to streamline and improve business operations; improving mental health care through artificial intelligence; and using machine learning to understand the environmental and health consequences of complex chemical reactions.

As part of the application process, student nominations were invited from each unit within the School of Engineering, as well as from the Institute’s four other schools and the MIT Schwarzman College of Computing. Five exceptional students were selected as fellows for the initiative’s third year.

Drew Buzzell is a doctoral candidate in electrical engineering and computer science whose research concerns telemonitoring, a fast-growing sphere of telehealth in which information is collected through internet-of-things (IoT) connected devices and transmitted to the cloud. Currently, the high volume of information involved in telemonitoring — and the time and energy costs of processing it — make data analysis difficult. Buzzell’s work is focused on edge computing, a new computing architecture that seeks to address these challenges by managing data closer to the source, in a distributed network of IoT devices. Buzzell earned his BS in physics and engineering science and his MS in engineering science from the Pennsylvania State University.

Mengying (Cathy) Fang is a master’s student in the MIT School of Architecture and Planning. Her research focuses on augmented reality and virtual reality platforms. Fang is developing novel sensors and machine components that combine computation, materials science, and engineering. Moving forward, she will explore topics including soft robotics techniques that could be integrated with clothes and wearable devices and haptic feedback in order to develop interactions with digital objects. Fang earned a BS in mechanical engineering and human-computer interaction from Carnegie Mellon University.

Xiaoyue Gong is a doctoral candidate in operations research at the MIT Sloan School of Management. Her research aims to harness the power of machine learning and data science to reduce inefficiencies in the operation of businesses, organizations, and society. With the support of an Accenture Fellowship, Gong seeks to find solutions to embedded operational problems by designing reinforcement learning methods and other machine learning techniques. Gong earned a BS in honors mathematics and interactive media arts from New York University.

Ruby Liu is a doctoral candidate in medical engineering and medical physics. Their research addresses the growing pandemic of loneliness among older adults, which leads to poor health outcomes and presents particularly high risks for historically marginalized people, including members of the LGBTQ+ community and people of color. Liu is designing a network of interconnected AI agents that foster connections between user and agent, offering mental health care while strengthening and facilitating human-human connections. Liu received a BS in biomedical engineering from Johns Hopkins University.

Joules Provenzano is a doctoral candidate in chemical engineering. Their work integrates machine learning and liquid chromatography-high resolution mass spectrometry (LC-HRMS) to improve our understanding of complex chemical reactions in the environment. As an Accenture Fellow, Provenzano will build upon recent advances in machine learning and LC-HRMS, including novel algorithms for processing real, experimental HR-MS data and new approaches in extracting structure-transformation rules and kinetics. Their research could speed the pace of discovery in the chemical sciences and benefit industries including oil and gas, pharmaceuticals, and agriculture. Provenzano earned a BS in chemical engineering and international and global studies from the Rochester Institute of Technology.


Meet the 2022-23 Accenture Fellows Republished from Source https://news.mit.edu/2022/meet-2022-23-accenture-fellows-1209 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by School of Engineering · Categorized: AI, MIT AI · Tagged: AI, MIT AI

Dec 08 2022

Pursuing a practical approach to research

Koroush Shirvan, the John Clark Hardwick Career Development Professor in the Department of Nuclear Science and Engineering (NSE), knows that the nuclear industry has traditionally been wary of innovations until they are shown to have proven utility. As a result, he has relentlessly focused on practical applications in his research, work that has netted him the 2022 Reactor Technology Award from the American Nuclear Society. “The award has usually recognized practical contributions to the field of reactor design and has not often gone to academia,” Shirvan says.

One of these “practical contributions” is in the field of accident-tolerant fuels, a program launched by the U.S. Nuclear Regulatory Commission in the wake of the 2011 Fukushima Daiichi incident. The goal within this program, says Shirvan, is to develop new forms of nuclear fuels that can tolerate heat. His team, with students from over 16 countries, is working on numerous possibilities that range in composition and method of production.

Another aspect of Shirvan’s research focuses on how radiation impacts heat transfer mechanisms in the reactor. The team found fuel corrosion to be the driving force. “[The research] informs how nuclear fuels perform in the reactor, from a practical point of view,” Shirvan says.

Optimizing nuclear reactor design

A summer internship when Shirvan was an undergraduate at the University of Florida at Gainesville seeded his drive to focus on practical applications. A nearby nuclear utility was losing millions of dollars because of crud accumulating on its fuel rods. The company had been working around the problem by loading more fuel before it had extracted all the life from earlier batches.

Placement of fuel rods in a nuclear reactor is a complex problem with many factors, such as the life of the fuel and the location of hot spots, affecting the outcome. Reactors reshuffle their configuration of fuel rods every 18-24 months, optimizing against some 15-20 constraints across roughly 200-800 assemblies. The mind-boggling scale of the problem means that plants have to rely on experienced engineers.

During his internship, Shirvan optimized the program used to place fuel rods in the reactor. He found that certain rods in assemblies were more prone to the crud deposits, and reworked their configurations, optimizing for these rods’ performance instead of adding assemblies.

In recent years, Shirvan has applied a branch of artificial intelligence — reinforcement learning — to the configuration problem and created a software program used by the largest U.S. nuclear utility. “This program gives even a layperson the ability to reconfigure the fuels and the reactor without having expert knowledge,” Shirvan says.
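Shirvan’s actual software is far more sophisticated, but the flavor of the underlying search can be sketched in a few lines. The toy below uses greedy local search, a deliberately simplified stand-in for the reinforcement-learning optimizer described above, and every number in it, including the assembly “reactivity” values, position weights, and the single peaking objective, is invented for illustration:

```python
import random

def peaking_penalty(layout, position_weight):
    # Dot product of assembly reactivity and position weight: a crude
    # stand-in for the power-peaking metric engineers try to minimize.
    return sum(r * w for r, w in zip(layout, position_weight))

def optimize_loading(reactivity, position_weight, proposals=2000, seed=0):
    """Propose random swaps of two assemblies; keep a swap only if it
    lowers the peaking penalty, otherwise revert it."""
    rng = random.Random(seed)
    layout = list(reactivity)
    best = peaking_penalty(layout, position_weight)
    for _ in range(proposals):
        i, j = rng.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
        score = peaking_penalty(layout, position_weight)
        if score < best:
            best = score                                  # keep the swap
        else:
            layout[i], layout[j] = layout[j], layout[i]   # revert it
    return layout, best

# Invented data: fresh fuel (high reactivity) should migrate away from
# high-weight "hot" core positions.
reactivity = [1.0, 0.8, 0.6, 0.4, 0.2]
position_weight = [0.9, 1.0, 0.7, 0.5, 0.3]
layout, score = optimize_loading(reactivity, position_weight)
```

A real core-loading optimizer juggles many coupled constraints with a physics simulator in the inner loop; reinforcement learning earns its keep there by learning which shuffles tend to pay off rather than trying them blindly.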

From advanced math to counting jelly beans

Shirvan’s own expertise in nuclear science and engineering developed quite organically. He grew up in Tehran, Iran, and when he was 14 the family moved to Gainesville, where Shirvan’s aunt and family live. He remembers an awkward couple of years at the new high school where he was grouped in with newly arrived international students, and placed in entry-level classes. “I went from doing advanced mathematics in Iran to counting jelly beans,” he laughs.

Shirvan applied to the University of Florida for his undergraduate studies since it made economic sense; the school gave full scholarships to Floridian students who received a certain minimum SAT score. Shirvan qualified. His uncle, who was a professor in the nuclear engineering department then, encouraged Shirvan to take classes in the department. Under his uncle’s mentorship, the courses Shirvan took, and his internship, cemented his love of the interdisciplinary approach that the field demanded.

Having always known that he wanted to teach (he remembers finishing his math tests early in Tehran so he could earn the reward of being class monitor), Shirvan knew graduate school was next. His uncle encouraged him to apply to MIT and to the University of Michigan, home to reputable programs in the field. Shirvan chose MIT because “only at MIT was there a program on nuclear design. There were faculty dedicated to designing new reactors, looking at multiple disciplines, and putting all of that together.” He went on to pursue his master’s and doctoral studies at NSE under the supervision of Professor Mujid Kazimi, focusing on compact pressurized and boiling water reactor designs. When Kazimi passed away suddenly in 2015, Shirvan, then a research scientist, switched to the tenure track to guide the professor’s team.

Another project Shirvan took on in 2015 was leadership of MIT’s course on nuclear reactor technology for utility executives. Offered only by the Institute, the program is an introduction to nuclear engineering and safety for personnel who might not have much background in the area. “It’s a great course because you get to see what the real problems are in the energy sector … like grid stability,” Shirvan says.

A multipronged approach to savings

Another very real problem nuclear utilities face is cost. Contrary to what one hears on the news, one of the biggest stumbling blocks to building new nuclear facilities in the United States is cost, which today can be up to three times that of renewables, Shirvan says. While many approaches such as advanced manufacturing have been tried, Shirvan believes that the solution to decrease expenditures lies in designing more compact reactors.

His team has developed an open-source advanced nuclear cost tool and has focused on two different designs: a small water reactor using compact steam technology and a horizontal gas reactor. Compactness also means making fuels more efficient, as Shirvan’s work does, and improving the heat-exchange devices. It all comes back to basics, Shirvan explains, and to bringing “commercially viable arguments in with your research.”

Shirvan is excited about the future of the U.S. nuclear industry, and about the fact that the 2022 Inflation Reduction Act grants the same subsidies to nuclear as it does to renewables. Even on this new level playing field, advanced nuclear still has a long way to go in terms of affordability, he admits. “It’s time to push forward with cost-effective design,” Shirvan says. “I look forward to supporting this by continuing to guide these efforts with research from my team.”


Pursuing a practical approach to research Republished from Source https://news.mit.edu/2022/pursuing-practical-approach-research-1208 via https://news.mit.edu/rss/topic/artificial-intelligence2


Written by Poornima Apte Department of Nuclear Science and Engineering · Categorized: AI, MIT AI · Tagged: AI, MIT AI

