Pentagon’s draft AI ethics guidelines fight bias and rogue machines

The draft demands equitable AI that avoids "unintended bias" in algorithms, such as racism or sexism. AI could lead to people being treated "unfairly," the board said, even when they're not in life-and-death situations. It also called on the military to ensure that its data sources, not just its code, are neutral. Bias can be valuable when it's deliberate, such as for prioritizing key combatants or minimizing civilian casualties, but not when it creeps in unintentionally.

The documents also call for "governable" AI that can detect when it's about to cause unnecessary harm and stop itself (or hand control to a human operator) in time. This wouldn't greenlight fully automated weapons, but it would reduce the chances of AI going rogue. Relatedly, the draft calls for "traceable" AI output that lets people see how a system reached its conclusion.
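To make those two ideas concrete, here's a minimal sketch (not from the draft itself) of how a decision gate might pair a halt-or-handoff check with a logged rationale; the function names, thresholds, and harm scores are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of "governable" plus "traceable" behavior:
# the system halts or defers to a human when its own checks flag
# potential unnecessary harm, and every decision carries a trace.

@dataclass
class Decision:
    action: str                                 # "proceed", "halt", or "defer_to_human"
    trace: list = field(default_factory=list)   # human-readable reasoning steps

def gated_decision(confidence: float, predicted_harm: float,
                   harm_limit: float = 0.1, min_confidence: float = 0.9) -> Decision:
    """Run governability checks before acting, and record why."""
    trace = [f"confidence={confidence:.2f}", f"predicted_harm={predicted_harm:.2f}"]
    if predicted_harm > harm_limit:
        trace.append(f"harm exceeds limit {harm_limit}: halting")
        return Decision("halt", trace)
    if confidence < min_confidence:
        trace.append(f"confidence below {min_confidence}: deferring to operator")
        return Decision("defer_to_human", trace)
    trace.append("all checks passed")
    return Decision("proceed", trace)

# Example: low confidence hands control to a human rather than acting.
print(gated_decision(confidence=0.6, predicted_harm=0.05))
```

The point of the sketch is the pairing: the same checks that stop the system also produce the record that lets a person audit how it reached its conclusion.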

While the draft is promising, there's still the challenge of putting it into practice. It's easy to promise more accountable and trustworthy AI; it's another matter to ensure that every military branch upholds those ideals on every project. As Defense One observed, though, the Department may have an advantage over tech companies in that it's starting with a relatively blank slate. It doesn't have to make exceptions for existing AI projects or rethink an established strategy; the guidelines can be in place from day one.
