AI Can Write Code Like Humans, Bugs and All

Some software developers are now letting artificial intelligence help write their code. They are finding that AI has human-like flaws too.

Last June, GitHub, a Microsoft subsidiary that provides tools for hosting and collaborating on code, released a beta version of a program that uses AI to assist developers. Start typing a command, a database query, or a request to an API, and the program, called Copilot, will guess your intent and write the rest for you.
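To make this concrete: a developer might type nothing more than a comment and a function signature, and the tool suggests a body. The sketch below is purely illustrative; the function name and its implementation are hypothetical examples of a completion, not actual Copilot output.

```python
from datetime import date

# A developer might type only the comment and "def" line below;
# a Copilot-style tool would then propose the rest of the function.

# Return the number of days between two ISO-format dates.
def days_between(start: str, end: str) -> int:
    return abs((date.fromisoformat(end) - date.fromisoformat(start)).days)
```

Suggestions like this look plausible at a glance, which is exactly why subtle mistakes in them can be easy to accept.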

Alex Naka, a data scientist at a biotech firm who signed up to test Copilot, says the program can be very helpful and has changed the way he works. “It lets me spend less time jumping to the browser to look up API docs or examples on Stack Overflow,” he says. “It does feel a bit like my job has shifted from being a generator of code to being a discriminator of it.”

But Naka has found that errors can creep into his code in a variety of ways. “There have been times where I’ve missed some kind of subtle error when I accept one of its suggestions,” he says. “And it can be really hard to track down, perhaps because it seems to make mistakes with a different flavor than the ones I would make.”

The risks of AI generating flawed code may be surprisingly high. Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contained security flaws around 40 percent of the time.

The figure is “a little bit higher than I would have expected,” says Brendan Dolan-Gavitt, a professor at NYU who was involved in the analysis. “But the way Copilot was trained wasn’t actually to write good code; it was just trained to produce the kind of text that would follow a given prompt.”

Despite such flaws, Copilot and similar AI-powered tools may herald a sea change in the way software developers write code. There is growing interest in using AI to help automate more routine work. But Copilot also highlights some of the pitfalls of today’s AI techniques.

While reviewing the code made available for the Copilot plugin, Dolan-Gavitt found that it included a list of restricted phrases. These were apparently introduced to prevent the system from blurting out offensive messages or copying well-known code written by someone else.

Oege de Moor, vice president of research at GitHub and one of the developers of Copilot, says security has been a concern from the start. He says the percentage of flawed code cited by the NYU researchers applies only to a subset of code where security flaws are more likely.

De Moor invented CodeQL, a tool used by the NYU researchers that automatically identifies bugs in code. He says GitHub recommends that developers use Copilot together with CodeQL to ensure their work is safe.

The GitHub program is built on top of an AI model developed by OpenAI, a prominent AI company doing cutting-edge work in machine learning. That model, called Codex, consists of a large artificial neural network trained to predict the next characters in both text and computer code. The algorithm ingested billions of lines of code stored on GitHub (not all of it flawless) to learn how to write code.
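The core idea of predicting what comes next can be illustrated, in vastly simplified form, with a toy character-level model. Codex uses a large neural network rather than anything like the counting approach below; this sketch only demonstrates the next-token-prediction objective, not how Codex actually works.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which characters tend to follow it."""
    model = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        model[current][following] += 1
    return model

def predict_next(model, char):
    """Return the most frequently observed character after `char`."""
    return model[char].most_common(1)[0][0]
```

A real code model works the same way in spirit, predicting likely continuations from patterns in its training data, which is why it reproduces the habits (good and bad) of the code it was trained on.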

OpenAI has developed its own AI coding tool on top of Codex that can perform some striking coding tricks. It can turn a typed instruction, such as “Create an array of random numbers between 1 and 100 and then return the largest of them,” into working code in several programming languages.
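For a prompt along those lines, asking for random numbers between 1 and 100 and the largest of them, a Codex-style completion might resemble the following. This is a hypothetical sketch of a plausible output, not an actual transcript from the tool.

```python
import random

def largest_random(count=10):
    """Generate `count` random integers between 1 and 100 and return the largest."""
    numbers = [random.randint(1, 100) for _ in range(count)]
    return max(numbers)
```

The output is ordinary, idiomatic code, which is part of what makes such tools feel uncanny to developers.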

Another version of the same OpenAI technology, called GPT-3, can generate coherent text on a given subject, but it can also regurgitate offensive or biased language learned from the darker corners of the web.

Copilot and Codex have led some developers to wonder whether AI might automate them out of a job. In fact, as Naka’s experience shows, developers need considerable skill to use the program, since they often must vet or tweak its suggestions.
