6 Biggest Limitations of Artificial Intelligence Technology

by shishir, August 25th, 2020

Too Long; Didn't Read

The release of GPT-3 marks a significant milestone in the development of AI, but the path forward is still obscure. Here are six of the major limitations facing data scientists today. Currently, large troves of data sit in the hands of large corporate organizations, giving those companies an inherent advantage over the smaller startups that have just entered the AI development race. There is also still work to be done in figuring out the limits within which we should use AI, and no consensus yet on the ethics of implementing it.


While the release of GPT-3 marks a significant milestone in the development of AI, the path forward is still obscure, and the technology has real limitations. Here are six of the biggest ones facing data scientists today.

1. Access to Data

For prediction or decision models to be trained properly, they need data. As many have put it, data has ousted oil as the most sought-after commodity; it has become a new currency. Currently, large troves of it sit in the hands of large corporate organizations.

These companies have an inherent advantage, which puts the smaller startups that have just entered the AI development race at an unfair disadvantage. If nothing is done about this, it will drive the wedge in the power dynamic between big tech and startups even deeper.

2. Bias

The ways biases can creep into data-modeling processes (which fuel AI) are quite frightening, not to mention the underlying prejudices, identified or not, of the creators themselves. Biased AI is much more nuanced than just tainted data: bias can slip in at many stages of the deep-learning process, and our standard design procedures simply aren't equipped to identify it.

As this MIT Technology Review article points out, the way we currently design AI algorithms isn't really meant to identify and retroactively remove biases. Since most of these algorithms are tested only for their performance, a lot of unintended fluff flows through, whether in the form of prejudiced data, a lack of social context, or a debatable definition of fairness.
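To make the "tested only for performance" problem concrete, here is a toy sketch in Python. Everything in it is made up for illustration: the labels, the group membership, and the error rates are simulated, not taken from any real model or dataset. The point is simply that a single aggregate accuracy number can look healthy while hiding a large disparity for a minority subgroup.

```python
# Toy illustration: aggregate accuracy can hide subgroup disparities.
# All numbers below (group sizes, error rates) are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.random(n) < 0.10                  # ~10% of examples belong to a minority group
y_true = rng.integers(0, 2, size=n)           # simulated ground-truth labels

# Hypothetical model: wrong ~5% of the time on the majority, ~40% on the minority.
error_rate = np.where(group, 0.40, 0.05)
wrong = rng.random(n) < error_rate
y_pred = np.where(wrong, 1 - y_true, y_true)

overall = (y_pred == y_true).mean()
majority = (y_pred[~group] == y_true[~group]).mean()
minority = (y_pred[group] == y_true[group]).mean()

print(f"Overall accuracy:        {overall:.1%}")   # looks acceptable in aggregate
print(f"Majority-group accuracy: {majority:.1%}")
print(f"Minority-group accuracy: {minority:.1%}")  # the disparity only shows up here
```

A performance-only evaluation stops at the first number; a bias audit has to slice the results by every group that matters.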

3. Computing Time

Even though technology has been advancing rapidly in recent years, there are still hardware limitations, such as limited computation resources (RAM and GPU cycles), that we have to overcome. Here again, established companies have a significant advantage, given the costs of developing and acquiring such custom, specialized hardware.
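A rough back-of-envelope calculation shows why memory alone is a bottleneck. The model size and byte counts below are assumptions (a GPT-3-scale parameter count and a commonly quoted rule of thumb for mixed-precision Adam training), not figures from this article.

```python
# Rough, illustrative arithmetic only: how much memory a large model needs.
params = 175e9            # assumed model size, roughly GPT-3 scale
bytes_per_weight = 2      # assuming 16-bit (fp16) weights

weights_gb = params * bytes_per_weight / 1e9

# Training needs far more than the weights: gradients plus optimizer state.
# A commonly quoted rule of thumb for mixed-precision Adam is on the order of
# 16 bytes per parameter in total; treat this factor as an assumption.
training_gb = params * 16 / 1e9

print(f"Weights alone:      ~{weights_gb:,.0f} GB")
print(f"Training footprint: ~{training_gb:,.0f} GB (rule-of-thumb estimate)")
```

Numbers of this magnitude are why such models are trained across large clusters of accelerators rather than on a single machine.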

4. Cost

Mining, storing, and analyzing data is very costly, both in energy and in hardware.

The estimated training cost for the GPT-3 model was $4.6 million. Another estimate (from a video embedded in the original post) predicted that training a model comparable to the human brain would cost substantially more than GPT-3, coming in at around $2.6 billion.
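To show where estimates of this kind come from, here is a back-of-envelope sketch. Every number in it is an assumption I have plugged in for illustration (model size, token count, GPU throughput, and cloud price), using the common approximation of roughly 6 floating-point operations per parameter per training token; it is not the methodology behind the $4.6 million figure, though it lands in the same ballpark.

```python
# Back-of-envelope sketch of a training-cost estimate.
# Every value below is an assumption for illustration, not a measured figure.
params = 175e9                 # assumed model size (~GPT-3 scale)
tokens = 300e9                 # assumed number of training tokens
flops = 6 * params * tokens    # common approximation: ~6 FLOPs per parameter per token

gpu_flops_per_sec = 30e12      # assumed sustained throughput of one GPU (30 TFLOP/s)
gpu_hour_cost = 1.50           # assumed cloud price per GPU-hour, in USD

gpu_hours = flops / gpu_flops_per_sec / 3600
cost = gpu_hours * gpu_hour_cost

print(f"~{flops:.2e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, roughly ${cost:,.0f}")
```

Small changes to any of these assumptions move the total by millions of dollars, which is exactly why only well-funded organizations can afford to experiment at this scale.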

Also, skilled engineers in these fields are currently a rare commodity, so hiring them will definitely dent companies' pockets, once again putting newer and smaller companies at a disadvantage.

5. Adversarial Attacks

Since AI isn't human, it isn't exactly equipped to adapt to deviations in circumstances. For example, simply placing tape on the road can cause an autonomous vehicle to swerve into the wrong lane and crash, while a human driver might not even register or react to the tape. Although in normal conditions the autonomous vehicle may be far safer, it is these outlier cases that we need to worry about.

It is this inability to adapt that highlights a glaring security flaw, one that has yet to be effectively addressed. While 'fooling' these models can sometimes be fun and harmless (like getting a banana misclassified as a toaster), in extreme cases (such as defense applications) it could put lives at risk.
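The mechanics behind such attacks are surprisingly simple. Below is a minimal toy sketch of one common idea, a fast-gradient-sign-style perturbation, applied to a made-up linear classifier and a fake "image". It is only an illustration under those assumptions: real attacks target deep networks, but the principle is the same, many individually tiny input changes, all pointed in the worst direction, add up to a flipped prediction.

```python
# Toy sketch of an adversarial (FGSM-style) perturbation on a linear classifier.
# The weights and the "image" are made up; real attacks target deep networks.
import numpy as np

rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)                      # toy "trained" weights of a linear classifier
b = -0.5 * w.sum()                          # place the decision boundary at an all-gray image
x = np.full(d, 0.5) + 0.004 * w             # a clean "image" the model classifies confidently

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability of class 1

# FGSM idea: nudge every pixel a barely visible amount (0.01 on a 0-1 scale)
# in the direction that most increases the loss for the true class.
epsilon = 0.01
x_adv = x - epsilon * np.sign(w)

print(f"Clean prediction:         {predict(x):.3f}")      # confidently class 1
print(f"Adversarial prediction:   {predict(x_adv):.3f}")  # flipped by tiny changes
print(f"Largest per-pixel change: {np.abs(x_adv - x).max():.3f}")
```

Deep image classifiers can be fooled the same way, using the network's own gradients to choose the perturbation, which is why a few well-placed stickers or strips of tape can change a prediction without a human noticing anything unusual.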

6. No Consensus on Safety, Ethics, and Privacy

There is still work to be done in figuring out the limits within which we should use AI. Current limitations highlight the importance of safety in AI, and this must be acted upon swiftly. Additionally, most critics of AI argue along the lines of the ethics of implementing it, not just in terms of how it makes privacy a forgotten concept, but also philosophically.

We consider our intelligence inherently human and unique, and giving away that exclusivity can feel conflicting. One popular question that arises is: if robots can do exactly what humans can and in essence become equal to humans, do they deserve human rights? If so, how far do you go in defining these robots' rights? There are no definite answers here. Given how recent AI development is, the philosophy of AI is still in its nascent stages. I am very excited to see how this sphere of AI develops.

Bottom Line

Certain facets of AI development have made entry into this field very restrictive. Given the cost, engineering, and hardware requirements, AI development demands significant capital, creating high barriers to entry. If this problem persists, the minds behind its development are likely to be employed predominantly by big tech.

In the past, technological revolutions have allowed new players to burst onto the scene with fresh ideas. This is exactly how the companies we now refer to as big tech (Amazon, Google, Facebook, Apple, and others) got their start. While we are only now beginning to untangle the implications of their vast power, the impact they have had on society is undeniable. It is only fair to presume that allowing new companies and minds from a new generation to spring up will lead to positive outcomes.

The development of AI can aggravate the dichotomy between those in power and those without. It might also accelerate the divide between humans with AI and the unfortunate few without. Rather than humans versus AI, the future might look like humans with AI versus humans without.

While that may, ironically, be the most tangible impact of AI development, I don't think that it will be the most significant one. I believe that the philosophical implications of AI are the ones of greatest importance. Though the idea of such a technology making us question the very basic tenets of our existence seems daunting, I think that this experience will be wholly humbling. It hopefully will lead to startling discoveries whose implications transcend mere individuals and companies.

Previously published on: https://lucidityproject.home.blog/blog-feed/