Researchers Reduce Bias in AI Models while Maintaining or Improving Accuracy
Machine-learning models can fail when they attempt to make predictions for individuals who were underrepresented in the datasets they were trained on.
For instance, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that contains mostly male patients. That model might make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is promising, it often requires removing a large amount of data, hurting the model's overall performance.
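Dataset balancing of this kind can be sketched in a few lines. The following is a minimal illustration, not the researchers' method; the group labels and toy data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced training set: 900 samples from group A, 100 from group B.
groups = np.array(["A"] * 900 + ["B"] * 100)
X = rng.normal(size=(1000, 4))

def balance_by_subsampling(X, groups, rng):
    """Downsample every subgroup to the size of the smallest one."""
    labels, counts = np.unique(groups, return_counts=True)
    target = counts.min()
    keep = []
    for g in labels:
        idx = np.flatnonzero(groups == g)
        keep.extend(rng.choice(idx, size=target, replace=False))
    keep = np.sort(np.array(keep))
    return X[keep], groups[keep]

Xb, gb = balance_by_subsampling(X, groups, rng)
print(len(Xb))  # prints 200: 800 of the 900 group-A points were discarded
```

The cost the article describes is visible here: to equalize the two groups, the sketch throws away 80 percent of the data, which is exactly the accuracy loss the MIT technique aims to avoid.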
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer data points than other methods, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This method could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren't misdiagnosed due to a biased AI model.
"Many other algorithms that try to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that assumption is not true. There are specific points in our dataset that are contributing to this bias, and we can find those data points, remove them, and get better performance," says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and co-lead author of a paper on this technique.
She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev.