Google is reportedly developing code that can write, correct, and update itself: a looming threat to human developers in the future?


Major technology companies seem committed to revolutionizing the software engineering sector. Microsoft took the first step with GitHub Copilot, its artificial intelligence code-suggestion tool. Google now appears to want to go further with a secret project that aims to create code that can write, correct, and update itself. The initiative builds on advances in artificial intelligence and revives the debate about whether human computer scientists could disappear in the future.

According to sources familiar with Google's current internal work, the initiative was born under the name Pitchfork and has since been renamed AI Developer Assistance. It is part of Google's bets in the field of generative artificial intelligence.

The details of how this tool works remain a mystery, but the few that have come to light paint an interesting picture of what to expect from the project. Pitchfork, or AI Developer Assistance, is a tool that uses machine learning to teach code to write and rewrite itself. How? By learning the characteristic styles of programming languages and applying that knowledge to write new lines of code.

The original intention behind the project was to build a platform capable of automatically updating the Python code base whenever a new version of the language is released, without requiring human intervention or the hiring of a large number of engineers.
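To make that idea concrete, the kind of migration described here can be sketched in a very simplified form with Python's standard ast module. Google has not published how Pitchfork actually works, and it reportedly relies on machine learning rather than hand-written rules; the deterministic rewrite below only illustrates the class of task, here updating collections.Mapping (removed in Python 3.10) to collections.abc.Mapping.

```python
# Minimal sketch of an automated code migration; not Pitchfork's actual
# design, which is unpublished. Requires Python 3.9+ for ast.unparse.
import ast

class CollectionsAbcFixer(ast.NodeTransformer):
    """Rewrite collections.Mapping (removed in Python 3.10)
    to collections.abc.Mapping, and similarly for other moved ABCs."""
    MOVED = {"Mapping", "MutableMapping", "Sequence", "Iterable"}

    def visit_Attribute(self, node: ast.Attribute) -> ast.AST:
        self.generic_visit(node)
        if (isinstance(node.value, ast.Name)
                and node.value.id == "collections"
                and node.attr in self.MOVED):
            # collections.Mapping -> collections.abc.Mapping
            node.value = ast.Attribute(
                value=ast.Name(id="collections", ctx=ast.Load()),
                attr="abc", ctx=ast.Load())
        return node

old_source = (
    "import collections\n"
    "def f(x):\n"
    "    return isinstance(x, collections.Mapping)\n"
)
tree = CollectionsAbcFixer().visit(ast.parse(old_source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # ... isinstance(x, collections.abc.Mapping)
```

The gap between this toy and the reported goal is exactly where the machine learning would come in: instead of hand-coding one fixer per language change, the system would learn the rewrites itself.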

However, the program's potential turned out to be greater than expected. The goal now is to bring to life a general-purpose system that can maintain quality standards in code without relying on human intervention for development and update tasks.
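Again purely as a hedged illustration, since Google has not described Pitchfork's validation loop: one common pattern for "maintaining quality standards without human intervention" is to accept an automated rewrite only if the project's own quality gate, such as its test suite, still passes. The sketch below assumes pytest is installed; every name in it is illustrative.

```python
# Hedged sketch of an autonomous propose-verify-accept loop; all names
# are illustrative and none of this reflects Pitchfork's internals.
import pathlib
import shutil
import subprocess
import tempfile

def apply_if_tests_pass(repo: pathlib.Path, rewrite) -> bool:
    """Apply `rewrite` to every .py file in a scratch copy of `repo`,
    run the test suite there, and keep the change only if it passes."""
    with tempfile.TemporaryDirectory() as tmp:
        work = pathlib.Path(tmp) / "work"
        shutil.copytree(repo, work)
        for path in work.rglob("*.py"):
            path.write_text(rewrite(path.read_text()))
        # The quality gate: an automated change that breaks the tests
        # is discarded, with no human review involved.
        passed = subprocess.run(
            ["python", "-m", "pytest", "-q"], cwd=work
        ).returncode == 0
        if passed:
            for path in work.rglob("*.py"):
                (repo / path.relative_to(work)).write_text(path.read_text())
        return passed
```

A real system would need a far richer gate (linters, type checkers, staged rollouts), but the shape of the loop, propose, verify, then accept or discard, is the part that removes the human from the cycle.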

Google still needs to resolve some issues before showing the tool to the public. Beyond the technical aspects that remain to be addressed, the legal and ethical questions are unavoidable. The California company found itself in the spotlight in the middle of the year over the case of the engineer who was fired for claiming that LaMDA, its artificial intelligence model for natural language conversations, showed signs of human-like sentience.

The Pitchfork initiative revives the debate about the disappearance of developers in the future. Indeed, when it comes to artificial intelligence, two main schools of thought collide: those who see it as nothing more than a tool, and those who believe it is only a matter of time before it becomes a threat to humanity. Opinions keep piling up in the debate, and some suggest that artificial general intelligence may be upon us within 5 to 10 years.

Machines would then be endowed with common sense. At the stage of artificial general intelligence, they would be capable of causal reasoning, that is, the ability to reason about why things happen. Initiatives like Pitchfork would be well placed to push human computer scientists to the sidelines.

And you?

Do current developments in software engineering raise legitimate concerns about the future of human computer scientists in the field?
What do you make of the possibility that research leads to artificial general intelligence within 5 to 10 years?
How do you see artificial intelligence in 5 to 10 years: as a tool, or as a risk to your work as a developer?

