
AI and the rise of gray-collar crime: Who bears legal responsibility?

As AI systems make increasingly autonomous decisions, existing laws struggle to define accountability – raising urgent questions about guilt, regulation, and rights in the machine age. | Photo credit: Alfieri/Getty Images

We have categorized crimes in every shade of human capability: white-collar frauds, blue-collar thefts, and every misstep in between. But what if the perpetrator is not human? Enter the next frontier: gray-collar crimes, a murky middle ground born not of flesh and blood but of algorithms and silicon, of gray areas and moral ambiguities. Welcome to the age of the AI offender, where the question is no longer who did it, but what did it.

This new genre of gray-collar crime demands a brand-new legal framework. But where do we start? Isaac Asimov (an American science fiction author) offered a framework for containing the dangers of AI in his Three Laws of Robotics:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. And

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


But even Asimov recognized their limits and later introduced a Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” Yet are such laws sufficient for entities that evolve, through self-learning capabilities, beyond the intentions of their creators? AI's capacity for self-learning complicates legal accountability: adaptation through experience and divergence from the original code diminish the creators' influence as autonomy grows. This necessitates regulations, and consequences for violating them.

But consequences for violations come with complications. Consider a court case in the not-too-distant future: a self-driving car crash. It is a stylized trolley problem, only you are not the one pulling the lever. A computer system is. It is one thing for a human driver forced to choose between her life and another's, but how do you encode an AI to make that decision? A pressing legal question arises: who is culpable? The relatively young artificial intelligence (practically a minor) or the parent company?

When does an AI graduate from property to person? Is eighteen years of existence the universal mark of adulthood? Instead of measuring an AI's autonomy in years, should a test, say, the Turing test, be used to set the threshold? The Turing test was devised in 1950 to determine whether a computer can exhibit intelligent behavior equivalent to that of a person. As even the simplest forms of artificial intelligence now appear to reach far beyond that bar, is this decades-old test still a reasonable benchmark for an AI's autonomy? As code advances to the point where the parent cedes control of, and responsibility for, its creation's mistakes, we reach the edge of historical precedent. Here lies gray-collar crime, which defies categorization.

Star Trek grapples with a similar puzzle through Data, the android, whose autonomy is put on trial in the episode “The Measure of a Man.” Through a three-part test evaluating Data's intelligence, self-awareness, and consciousness, his right to choose was affirmed, challenging our definition of a machine. If AI ever attains a status similar to Data's, our legal systems will have to evolve.

Is the rise of AI a modern Prometheus, echoing the malignant spiral of Frankenstein and his monster with its rapid, unchecked progress? Frankenstein is a cautionary tale about the ethical limits of science taken too far. Is Victor culpable for the havoc his creation unleashed? By the same token, who is responsible for AI's violations? Will AI's creators bear this burden forever, or will there come a moment when AI, like Frankenstein's monster, outgrows its nascency?


A novel problem arises in how and where to assign blame. Moral culpability, however, is not the same as legal culpability – legal culpability requires regulations for the creators of the software. Such rules can be created and enforced only if an AI must pass a rigorous moral, ethical, or social test to be deemed an entity capable of bearing guilt. Just as the FDA stamps its approval on food and drugs, an institution must exist to stamp its approval on artificial intelligence software, certifying it either developed enough to stand trial or rudimentary enough that its engineers are to blame. But who creates these regulations? And who regulates the regulators? No one, yet.

Gray-collar crime is not just a legal challenge. It is a philosophical one. Are we prepared to prosecute crimes committed by actors that blur the line between tool and entity? The future demands answers as complex as the systems we have unleashed. In the words of Mary Shelley (English writer): “You are my creator, but I am your master.” Now it is up to us to decide what justice looks like in the age of AI and its many shades of gray.

Ezri Rohatgi is a high school senior from San Diego, California. She is interested in studying international human rights law, with a focus on border conflicts and postcolonial exploitation.
