One characteristic that enables creativity is the ability to think freely and fearlessly. It was this creativity that compelled thinkers and philosophers of the past to promote the notion of a single universal language: a language that could help people of different communities understand each other better.
Before the 20th century, the word "Esperanto", meaning "one who hopes", was already in use among many scholars. It was specifically attached to the idea of a man-made artificial language, a language that could promote peace in the world.
Developments in machine translation can be discussed in various ways. Books and articles on the history of machine translation show that authors have categorized its evolution along a few different lines.
The first is chronological, tracing the developments that took place:
1949 onwards
1952 to 1960
Post-1960
The second categorization looks at how machine translation systems were developed: which particular approach was devised and used at a certain point in time, and what results were achieved with it.
The third categorization describes individual contributions to the evolution of machine translation. It is not limited to individuals, however; universities and institutions that have conducted research and added pages to the history of machine translation are also counted.
The development of machine translation has been divided into six phases.
The Beginning: 1948-1960
Parsing and Disillusionment: 1960-1966
The Quiet Decade: 1967-1976
New Birth and Hope: 1976-1980
Contribution from the Japanese: 1980-1990
Post-1990 phase: Role of Web and New Translators
Let us discuss these phases in a bit more detail:
The Beginning: 1948-1960
In 1949, Warren Weaver suggested that computers could be used in the process of translation. He termed this activity "computer translation", the idea that a machine could be used to convert one language into another.
A few years later, in 1952, the first official meeting, the "Conference on Machine Translation", took place at MIT. Yehoshua Bar-Hillel organized and led this conference.
This conference sowed the seeds for future machine translation research. In 1954, Georgetown University joined forces with the tech giant IBM and created a system that was able to translate more than sixty Russian sentences into English.
However, the core impetus for the emergence of machine translation was defense: the same computing techniques had played a huge role in code-breaking, especially during the Second World War era.
This utility led to unprecedented development in machine translation. Hence, the first journal in the field, "Mechanical Translation", was published in 1954 by Victor Yngve.
With time and constant effort, researchers and authors claimed that within a few years machine translation would become more efficient, effective, and accurate.
Parsing and Disillusionment: 1960-1966
Machine translation began with the translation of single words looked up in a digital dictionary. With time, linguists pointed out that every language has its own unique structure; hence, the better approach would be to opt for parsing.
Parsing is basically the segmentation of a sentence into its various grammatical components, identifying those components and stating their relationships with each other.
In the early 1960s, parsing was advocated as the fundamental basis for future testing and development in machine translation.
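To make the idea of parsing concrete, here is a minimal sketch using the modern spaCy library (assuming spaCy and its small English model are installed; the systems of the 1960s relied on hand-written rules rather than a pretrained pipeline). It segments an English sentence into grammatical components and shows how each word relates to the others:

```python
import spacy

# Load a pretrained English pipeline (assumed to be installed via
# `python -m spacy download en_core_web_sm`).
nlp = spacy.load("en_core_web_sm")
doc = nlp("The committee published a critical report.")

# For every word, show its part of speech, its grammatical role,
# and the word it depends on.
for token in doc:
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
```

For this sentence, the output marks "committee" as the subject of "published" and "report" as its object, which is exactly the kind of structural information early parsers were meant to extract.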
In 1961, a new field called "computational linguistics" came into being, an amalgamation of computer science and linguistics. Experts consciously decided that it was time for the expertise of these two areas to work together, since only then could a quality-oriented product be delivered.
Three years later, in 1964, the US government took the initiative to review its investment in the advancement of machine translation. A committee called ALPAC, the Automatic Language Processing Advisory Committee, was established, and it started its work immediately.
In 1966, this committee presented what became known as the ALPAC Report. The report painted a bleak future for machine translation, claiming it to be a waste of money, time, and resources, and recommending that further research and development of machine translation be avoided.
The ALPAC report had an extremely negative impact on the future of machine translation, and for a number of years afterward, advancement in MT came to a halt.
The Quiet Decade: 1967-1976
After the ALPAC report, a stagnant phase followed, known as the Quiet Decade. An overall silence prevailed for a span of roughly ten years, and no significant advancement was made in the domain of machine translation.
Even so, modest endeavors were undertaken by small scientific communities with little or no allocated budget, mainly based in the USSR, European states, and Canada. It is worth mentioning that none of these communities received financial support from their governments; everything they achieved was a collaborative effort within those scientific communities.
New Birth and Hope: 1976-1980
Then came the mid-1970s, a symbol of hope and progress. This is generally considered the start of the fourth phase in the evolution of machine translation, which, together with the phase that followed, spanned roughly fifteen years.
In 1976, the Météo system was developed. This system translated weather forecasts, though only within the confined vocabulary and limited phrasing of meteorological reports.
Two years later, in 1978, a system called ATLAS 2 was established. It had the capability to translate proper phrases and sentences from Korean into Japanese, and this at a time when Japan and Korea were at each other's throats over political issues.
Contribution from the Japanese: 1980-1990
Next comes the fifth phase, spanning from 1980 to 1990. During this phase, Japanese commercial organizations contributed a great deal to machine translation.
We see this, for instance, with the Japanese company SHARP, which developed an automatic translator called DUET. This system could translate from English to Japanese and vice versa.
The Japanese contribution continued with the emergence of a system called PIVOT, developed by NEC in 1983. It was among the first commercial translation systems built around a clearly defined algorithmic design, and it gave a boost to the development of future models that took commercial machine translation to another level.
Another system followed, called PENSEE, credited to its creator OKI. It was a rule-based translation system, which enabled the machine to translate from Japanese to English and vice versa.
Another rule-based translation system came into the picture in 1986, created by Hitachi. It was named HICATS (Hitachi Computer-Aided Translation System) and had the ability to translate from Japanese to English.
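To give a flavor of what "based on rules" means, here is a toy sketch of our own (not the actual design of PENSEE or HICATS): a small bilingual lexicon combined with an explicit reordering rule that moves the verb from the Japanese sentence-final position into English subject-verb-object order.

```python
# Toy lexicon: romanized Japanese -> English. Particles like "wa" and "o"
# carry grammatical information only, so they map to nothing here.
LEXICON = {"watashi": "I", "wa": "", "hon": "a book", "o": "", "yomu": "read"}

def translate_rule_based(japanese: str) -> str:
    words = japanese.lower().split()
    # Rule 1: look up every word; drop anything that maps to an empty string.
    english = [LEXICON.get(w, w) for w in words if LEXICON.get(w, w)]
    # Rule 2: Japanese is verb-final (SOV); move the verb right after the
    # subject to get English subject-verb-object order.
    if len(english) >= 2:
        english.insert(1, english.pop())
    return " ".join(english)

print(translate_rule_based("watashi wa hon o yomu"))  # -> "I read a book"
```

Real rule-based systems of the 1980s used thousands of such lexical and grammatical rules, but the principle is the same: every translation decision is written down explicitly by linguists.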
Post-1990 phase: Role of Web and New Translators
The post-1990 phase has been considered by many to be an explosion of machine translation systems. It began with the input of the Web and professional translators, who focused in particular on alternate approaches and modes of machine translation. These modes mainly include:
Speech-to-speech
Speech-to-text
Text-to-speech, and
Text-to-text.
These modes mainly catered to the commercial side of machine translation. Hence, more institutions, companies, and individuals started to invent new technologies and applications to improve the functioning of MT.
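To illustrate how these modes relate to one another, here is a minimal sketch; the three component functions are toy placeholders of our own rather than any real speech recognition, translation, or speech synthesis engine. The point is simply that speech-to-speech translation chains the other modes together.

```python
def recognize_speech(audio: str) -> str:
    # Speech-to-text: pretend the "audio" has already been transcribed.
    return audio.strip().lower()

def translate_text(text: str) -> str:
    # Text-to-text: toy word-for-word French -> English lookup.
    toy_dict = {"bonjour": "hello", "le": "the", "monde": "world"}
    return " ".join(toy_dict.get(word, word) for word in text.split())

def synthesize_speech(text: str) -> str:
    # Text-to-speech: pretend to produce audio by tagging the string.
    return f"<audio:{text}>"

def speech_to_speech(audio: str) -> str:
    # Speech-to-speech is simply the other modes chained together.
    return synthesize_speech(translate_text(recognize_speech(audio)))

print(speech_to_speech("Bonjour le monde"))  # -> "<audio:hello the world>"
```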
In 1993, a global association created a project known as C-STAR, the Consortium for Speech Translation Advanced Research.
C-STAR, though built on advanced technology, concentrated on making a translation system that could be used in everyday situations, so that ordinary people or tourists talking to each other could benefit from it. In simpler terms, it was designed to handle dialogues such as a client speaking with a travel agent.
The first C-STAR system was demonstrated in 1993, and that year the public saw what the system could do. However, at this early stage it tackled only three languages:
English
German, and
Japanese
A few years later, in 1998, a company by the name of Softissimo created REVERSO. REVERSO focused primarily on using artificial intelligence to create translation tools that are easy to use. With time, REVERSO improved its technology and was able to offer many services to its consumers, including:
Dictionaries that not only substituted words from one language to another but also explored contextual and idiomatic meanings
Grammar and spelling checkers for different languages
In 2000, we see the establishment of a system called ALPH, developed in a Japanese laboratory. It had the ability to translate from Japanese to English and from Chinese to English. Eventually, this technology led to the creation of the first website for automatic translation. If you find it hard to picture what automatic translation is, just think of what Google Translate does.
With the advent of hybrid technology post-2007 came the hybrid machine translation system. This was the first time the following three approaches were all combined to deliver an efficient translated product (see the sketch after this list):
Statistical machine translation
Example-based machine translation
Rule-based machine translation
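As a rough illustration of the hybrid idea (a toy sketch of our own, not any particular commercial system), the snippet below generates candidate translations with a rule-based component and an example-based component, then uses a simple statistical score over a tiny target-language corpus to choose between them.

```python
from collections import Counter

# Rule-based component: a toy bilingual dictionary (German -> English)
# applied word for word.
DICTIONARY = {"das": "the", "wetter": "weather", "ist": "is", "schön": "nice"}

def rule_based(source_words):
    # Unknown words are simply passed through.
    return [DICTIONARY.get(w, w) for w in source_words]

# Example-based component: a tiny "translation memory" that reuses a stored
# translation when the whole source sentence has been seen before.
MEMORY = {("das", "wetter", "ist", "schön"): ["the", "weather", "is", "lovely"]}

def example_based(source_words):
    return MEMORY.get(tuple(source_words))

# Statistical component: word frequencies from a tiny English corpus,
# used only to score candidate outputs.
CORPUS = "the weather is lovely today and the weather was lovely yesterday".split()
COUNTS = Counter(CORPUS)

def score(candidate):
    # Higher score = words that are more common in the target-language corpus.
    return sum(COUNTS[w] for w in candidate)

def hybrid_translate(sentence):
    words = sentence.lower().split()
    candidates = [c for c in (rule_based(words), example_based(words)) if c]
    return " ".join(max(candidates, key=score))

print(hybrid_translate("Das Wetter ist schön"))  # -> "the weather is lovely"
```

In real hybrid systems the statistical component is a full translation or language model rather than a word-frequency count, but the division of labor is the same: rules and stored examples propose candidates, and statistics pick among them.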
We have now discussed the major phases in the development of machine translation. It is time to look at the current state of machine translation.
With the arrival of every new technology, one question is usually raised: will that technology replace humans?
In the context of machine translation, this question has often led to a major rift in the human translator community. Many translators fear that the use of machine translation will eventually lead to their unemployment.
On the contrary, many experts maintain that this replacement is not coming any time soon. The reason is that machine translation cannot capture the cultural intricacies, nuances, and linguistic preferences of a particular region in the final translated product. Hence, professional human translators are still needed for business localization purposes.
However, this does not negate the fact that machine translation is extremely advantageous when it comes to turnaround times and consistency. Firms that have to deal with large amounts of repetitive content need an efficient machine translation system so that they can obtain fast and consistent translations.
Moreover, a large proportion of everyday users of translation are ordinary working people looking to translate a few words here and there. For them, it would be unwise to contact a professional human translator every time they need to translate a particular word; the better choice is an automatic translation system such as Google Translate.
On the other hand, the quality of machine-based translation is something that often gets compromised, and for now will continue to be. Hence, when it comes to technical translation of medical, legal, or business documents, machine-translated text on its own will not suffice.
Here, humans take the lead. Creativity, technical precision, and the hidden undertones of linguistic context are only kept intact when human translators do the job.
Errors made by machine translation set back its progress and damage the reputation of MT, as well as the reputation of the organizations that rely on it, and they reinforce the notion that machines can never do the job done by humans.
The advances being made in artificial intelligence and machine learning support the claim that MT will one day perform translations as well as a human translator does. Then again, the same claim was made back in the 1950s, and it did not play out that way.
This shows that there will always be doubt and uncertainty whenever machines are compared to humans, and especially when machine translation is compared to professional human translators.
However, the most plausible future scenario is one where machines and humans work together, each to the best of their abilities. Machines and systems should be used in the areas where they outperform humans, and human translators should step in when machines are unable to produce quality translations.