Case Studies

Discover transformative solutions in action. See firsthand how our Pioneers’ commitment to tackling diverse challenges delivers tangible results. Explore our tailored approaches for navigating complexities, ensuring measurable success.

I. Problem

A language service of a large life science company was facing the typical challenges of common translation workflows. It was difficult to merge translation memory and machine translation into a single paradigm. There was a huge in-house effort to export files, select matching translation memories, send and receive translation packages, import files, and update databases. Additionally, each LSP (Language Service Provider) had a preferred platform to exchange files. The department had little visibility into how the translations were actually made, thus lacking the data to improve the process. Its valuable Multilingual Knowledge System was only used for term recognition.


II. Solution

After a successful PoC (Proof of Concept), the company decided to deploy a language factory. A language factory centralizes all automatic steps such as content recycling, machine translation, automatic correction, quality estimation, etc. The secret of excellence in production often lies not so much in the individual machines as in how smoothly they work together. Collecting data at every process step allows for constant optimization of the factory’s performance. The language factory uses three simple, standardized API calls for handover to the company’s LSPs: files to be reviewed by the expert-in-the-loop are posted, the status is polled, and the finished files are fetched.
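The three-call handover pattern can be sketched as a minimal in-memory stub; the class and method names below are illustrative, not the actual API:

```python
from enum import Enum


class JobStatus(Enum):
    IN_REVIEW = "in_review"
    DONE = "done"


class LspHandover:
    """In-memory stand-in for an LSP endpoint exposing the three
    standardized handover calls (names are illustrative)."""

    def __init__(self):
        self._jobs = {}

    def post_files(self, job_id, files):
        # Call 1: hand the files over for review by the expert-in-the-loop.
        self._jobs[job_id] = {"files": files, "status": JobStatus.IN_REVIEW}

    def poll_status(self, job_id):
        # Call 2: check whether the review has finished.
        return self._jobs[job_id]["status"]

    def fetch_files(self, job_id):
        # Call 3: retrieve the reviewed files once the job is done.
        job = self._jobs[job_id]
        if job["status"] is not JobStatus.DONE:
            raise RuntimeError("job not finished yet")
        return job["files"]

    def _complete(self, job_id, reviewed_files):
        # Stand-in for the LSP-side review step.
        self._jobs[job_id] = {"files": reviewed_files, "status": JobStatus.DONE}
```

In a real deployment each call would be an HTTP request against the LSP’s endpoint; the point is that three small, standardized operations are enough for the entire handover.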

The language factory connects in a similarly automatic way with the company’s content management systems. It uses the COTI standard to collect work and to return the translated files to the right location.

By analyzing human edits, the factory can train its AI and constantly improve its estimates. The linguistic assets collected in the content repository are used to train the machine translation. The Multilingual Knowledge System identifies domains and topics. This information is used to ensure that the largest chunks of the most relevant content are recycled. Sudden domain switches trigger QA warnings and lower QE scores.
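One simple signal the factory can learn from is the distance between raw MT output and the human edit. A minimal sketch, with an illustrative metric and threshold (not the factory’s actual QE model):

```python
from difflib import SequenceMatcher


def post_edit_similarity(mt_output: str, human_edit: str) -> float:
    """Character-level similarity between raw MT output and the human
    edit; 1.0 means the expert changed nothing."""
    return SequenceMatcher(None, mt_output, human_edit).ratio()


def qe_label(similarity: float, threshold: float = 0.9) -> str:
    """Map the similarity score to a coarse quality-estimation label."""
    return "good" if similarity >= threshold else "needs_review"
```

Aggregated over thousands of segments, such signals let the factory recalibrate which content can skip review and which must go to the expert-in-the-loop.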

With every project, the factory collects more data, which is clearly visualized in a dashboard. This way, the factory can not only be monitored easily, but certain parameters can also be tuned to optimize its operation. Finally, the cost-time-quality triangle can be smartly adjusted to meet business needs.


III. Experiences, Benefits, and Metrics

Months after deployment, the language factory has already processed millions of words into 36 languages supported by three LSPs. Besides delivering significant cost savings of around 28%, it allows the department to focus on more value-generating tasks than before. Its language experts can now enforce source text quality, prepare and train MT models, manage multilingual knowledge, define and adapt post-editing criteria, monitor the solution, and analyze process data.

Perhaps most important, though, is the constant collection of high-quality multilingual data. These linguistic assets are used for other applications, solving NLP tasks, and training LLMs. The vision is that the department delivers the data and knowledge to support any textual AI initiative of the company. Accordingly, it has renamed itself Language Operations.


I. Problem

What is MVLP and What is a Localization Product?

MVLP, or minimum viable localization product, is a term that describes the initial stage of a localization product. Apart from workflow orchestration and the data management and curation used to train the AI that generates it, an MVLP contains no human input. It is a workflow that consists of raw MT output followed by a trained AI post-editing process, and it corresponds to the LangOps manifesto principle “Build Language-agnostic”.

A localization product is a concept that can be compared to a software product in many ways. It is helpful to think about it in terms of DevOps practices, where the product is manipulated and iterated in sprints until it reaches its final form. Each version of a localization product delivers different added value, defined by the localization sprint objectives.


II. Solution

Why Do We Need a Minimum Viable Localization Product?

The MVLP serves as the base working product of localization. Having been machine translated and AI post-edited, it is a full placeholder for content that, while not finalized, can be used in gathering data on user behavior and traffic analysis.

This was the case when a client of Native Localization agreed to create a workflow that introduced real-time data into their localization decision-making.

The client, a software development company in the fintech sector, maintains a product platform as well as a knowledge base that enables customers to use the product to maximum efficiency. After Native had performed the product string localization, it made sense to follow up with the localization of the knowledge base. However, the client’s localization budget for that fiscal year was already spent, and this portion of content, quite substantial as supporting documentation tends to be, had not been included.

It would not be efficient from a UX perspective to have a disparity between content, so the solution required a LangOps-based approach, which dictates that we “leverage all data and tech” in order to make smart localization decisions. An MVLP was created for five main topic articles in 16 languages using the DeepL MT engine, accompanied by an OpenAI-powered AI engine trained with approved translation memory data and terminology data from previous localization work. The AI engine was further prompted on style, untranslatables, product names, etc. In this case, this data was enough to reduce the disparity with human output to a minimum. The articles were published in MVLP form for a month. This provided enough Google Analytics data for Native to execute a Blackbird automation that gathered the Analytics data and created a report of which articles, in which languages, generated over 10,000, over 5,000, and over 2,000 impressions. Based on the report, Native proposed a staggered localization effort with three priority levels, giving the client a chance to invest in localization where it mattered most.
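The report’s three-tier prioritization boils down to a simple bucketing step over the Analytics data. A minimal sketch (function name and tier labels are illustrative):

```python
def priority_tiers(impressions):
    """Bucket articles into the report's three priority levels (plus a
    deferred bucket) by monthly impressions per article."""
    tiers = {"high": [], "medium": [], "low": [], "deferred": []}
    for article, count in impressions.items():
        if count > 10_000:
            tiers["high"].append(article)
        elif count > 5_000:
            tiers["medium"].append(article)
        elif count > 2_000:
            tiers["low"].append(article)
        else:
            tiers["deferred"].append(article)
    return tiers
```

In practice the automation runs one such pass per language, so the same article can land in different tiers across the 16 markets.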

A follow-up solution was later applied to marketing, where the MVLP was used in marketing A/B testing with smaller audiences. The previously trained AI model (now supplemented with marketing-related data such as ICPs, content pillars, messaging intent, etc.) was used to gauge which ideas resonated more than others in 16 global markets at once. Marketing specialists later drew insights from the data and created campaigns that performed on average 20% better than their previous ones. In addition, the insights cost a fraction of what localized A/B testing would have cost a year before.


III. Experiences, Benefits, and Metrics

Learnings and Benefits

Applying comparatively new LangOps concepts to real-time use cases often yields many conclusions and much data for further iteration. In this case, we learned that for the MVLP to come as close as possible to human parity, the data used to train the AI matters a great deal. Results may vary, but to create an MVLP of sufficient quality, a data classification and baseline have to be established to ensure the MVLP is actually usable.

The MVLP significantly brings down the cost of A/B testing because the workflow is rather simple. Reduced costs allow designers to be a bit more carefree with their ideas, inviting more creativity and freedom without worrying about the expenses. Wider A/B testing means better live results.

The MVLP brings live data into localization workflows. The current digital landscape relies on applying data as quickly as possible to create impact. The MVLP is the first iteration of a localization product that is adjusted and polished with each localization sprint so that the software becomes truly relevant and resonant on a global scale.


I. Problem

Language service providers and internal language departments often face the challenge of receiving content for translation that is technologically and linguistically unsuitable for it. To improve the quality of the source language, it is important to reach out to upstream processes and win them over as sponsors of a global content delivery process. Also, as stated in the LangOps manifesto, the world is changing from a one-way communication paradigm to a conversation, a bidirectional flow of information. And of course, corporate end customers are demanding AI-driven solutions. LSPs and internal language services will have to cater to these new needs in order to stay relevant.


The challenge has been that traditional TMS and CAT tools are built for experts, not for upstream, non-linguistic stakeholders. Therefore, it has always proven difficult to onboard content creators, developers, engineers, and the like onto a common platform.


II. Solution

Our LangOps solution combines all the functionality and data access points that corporate end users need in order to interact with “language”. This includes manual and automatic content and translation project creation, as found in traditional localization portals. But beyond this, it also provides terminology retrieval, management, and verification options, machine translation solutions, taxonomies and structured data, systematic translator query management that helps pinpoint content issues, review and quality management features, and more. These functionalities are completely customizable to keep the user interface simple and deliver an optimal, tailored user experience. That way, onboarding enterprise-wide stakeholders is much easier and faster.


On the back end, our portal integrates with traditional TMSs and BMSs, but also with authoring tools, content management platforms, and proprietary or commercial corporate tools that can consume language data. We make sure all these platforms are kept up to date with the data. By integrating linguistic assets into corporate tools and platforms, we bring their functionality directly to the end users and thus increase the benefits and value customers get out of them.


III. Experiences, Benefits, and Metrics

We believe our platform is a major step towards a true LangOps platform. It gives corporate users exactly the tools and data they require, integrates with all the required upstream and downstream processes and hides the complexities of language technology from those who do not need to be exposed to it directly.


It has made corporate language management much easier and spreads the benefits of linguistic assets to a much larger audience in corporate environments. This in turn makes it much easier to obtain budgets and define upstream processes that improve content and communication throughout the entire organization and in all languages.


Matthias Caesar
I. Problem

In today’s globalized world, businesses often encounter significant challenges when it comes to software development and localization. The traditional silos between these two critical processes can lead to inefficiencies, delays, and even errors in the final product. This divide between software development and localization teams has long been a stumbling block for organizations striving for a global reach.


II. Solution

The answer is LangOps, an innovative approach that serves as a natural extension of DevOps, seamlessly uniting the worlds of software development and localization. LangOps empowers organizations to break down the silos between these traditionally separate domains, fostering collaboration and accelerating the delivery of localized software products.


III. Experiences, Benefits, and Metrics

LangOps accomplishes this by integrating localization considerations into the software design and development pipeline from the very beginning. Here’s how it works:


Early Integration: With LangOps, localization isn’t an afterthought; it’s an integral part of the development or even the design process. Designers and developers work alongside localization experts to ensure that internationalization is considered early on. This prevents common localization issues.


Continuous Localization: LangOps encourages continuous integration and continuous localization. As new features and updates are developed, they are simultaneously localized. This ensures that localized versions are always up to date and reduces the lag time between development and localization. This can be achieved with our L10n Portal and Services, which include NMT and AI to automate steps along the way, leading to a lean and agile end-to-end process.
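A core building block of continuous localization is detecting which source strings actually changed between builds, so that only those re-enter the translation pipeline. A minimal sketch, assuming string resources are held as key-to-text mappings (the representation is illustrative):

```python
def strings_to_localize(previous, current):
    """Given the string resources of the last localized build and the
    current build, return only the keys that are new or changed;
    everything else keeps its existing translation."""
    return {
        key: text
        for key, text in current.items()
        if previous.get(key) != text
    }
```

Hooked into CI, such a delta step keeps localization incremental: a release with two changed strings sends two strings, not the whole product, through MT and review.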


Automated Workflows: Automation plays a key role in LangOps. Automated testing, quality assurance, and deployment pipelines streamline the localization process, reducing the potential for human error and saving valuable time.



Marion Randelshofer
I. Problem

The demand for rapid, precise, and context-aware translations in the language services industry is at an all-time high. Traditional machine translation systems often miss the subtleties of language, requiring extensive post-editing and failing to meet the specific needs of diverse projects and clients. This challenge necessitates a solution that can understand and replicate the nuances of human language, adapt to various styles and tones, and integrate seamlessly into existing translation workflows.


II. Solution

GPT Integration in translate5

translate5’s innovative approach integrates Generative Pre-trained Transformer (GPT) technology as a customizable machine translation engine. This solution enables project managers (PMs) to create bespoke language resources tailored to each project’s unique requirements, leveraging:


Visual Translation Feature

translate5 offers a “What You See Is What You Get” (WYSIWYG) interface, allowing translators to work with the text within the layout for various source file formats, including CMS, Office, InDesign, video subtitling, and Android/iOS apps. This feature ensures translations fit the visual and cultural context of the original document, addressing challenges such as text length and layout compatibility.


Custom Training for GPT

PMs can train GPT with system messages, example data, and terminology, utilizing linguistic resources stored in translate5. This process, similar to onboarding a new translator with a style guide, ensures the AI’s output aligns closely with project expectations.
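The onboarding analogy maps naturally onto a chat-style prompt: the style guide and terminology go into the system message, and approved TM segments serve as few-shot examples. A minimal sketch of how such a request could be assembled (translate5’s actual implementation may differ):

```python
def build_translation_prompt(system_message, terminology, examples, source_text):
    """Assemble a chat-style prompt: style instructions plus enforced
    terminology in the system message, few-shot pairs from approved TM
    data, then the new source segment to translate."""
    term_lines = "\n".join(f"{src} -> {tgt}" for src, tgt in terminology.items())
    messages = [{
        "role": "system",
        "content": f"{system_message}\nUse this terminology:\n{term_lines}",
    }]
    for src, tgt in examples:  # few-shot examples from the TM
        messages.append({"role": "user", "content": src})
        messages.append({"role": "assistant", "content": tgt})
    messages.append({"role": "user", "content": source_text})
    return messages
```

The resulting messages list is what a chat-completion API consumes; swapping the system message, terminology, and examples per project is exactly what turns one GPT engine into many bespoke language resources.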


Collaborative Development and the Open Source Advantage

The successful integration of GPT within translate5 is the result of collaboration between the translate5 team, led by MittagQI, and World Translation. This partnership has facilitated technical development and ensured the solution meets the high standards required by professional translation services. As a third-generation open-source project, translate5 is backed by MittagQI, driving innovation, development, support and maintenance.


III. Experiences, Benefits, and Metrics

Evaluation and Impact

Translating technical documentation for Leica Geosystems from English to German showcased GPT’s capabilities, with its output compared against DeepL. Independent evaluations by experienced translators highlighted GPT’s fluency, idiomatic precision, and alignment with the client’s desired style and tone. Feedback emphasized GPT’s superior handling of style and readability, though noting the need for improvement in translation precision.

This advancement enables PMs to quickly create MT language resources in translate5, customized for each client or project. This transformation requires PMs to possess a deep linguistic understanding, making them prompt engineers who tailor AI output to client expectations, enhancing both efficiency and quality.


Conclusion

The integration of GPT into translate5 marks a significant advancement in translation technology, offering a customizable, efficient, and accurate solution for language service providers. This case study exemplifies the potential of AI and human expertise to meet the translation industry’s evolving demands, setting new benchmarks for quality and innovation. As translate5 continues to explore GPT’s use for various applications, it strengthens its leadership in leveraging AI to enhance language services.