Case Studies

Discover transformative solutions in action. See firsthand how our Pioneers’ commitment to tackling diverse challenges delivers tangible results. Explore our tailored approaches for navigating complexities, ensuring measurable success.

I. Problem

A language service of a large life science company was facing the typical challenges of common translation workflows. It was difficult to merge translation memory and machine translation into a single paradigm. There was a huge in-house effort to export files, select matching translation memories, send and receive translation packages, import files, and update databases. Additionally, each LSP (Language Service Provider) had a preferred platform to exchange files. The department had little visibility into how the translations were actually made, thus lacking the data to improve the process. Its valuable Multilingual Knowledge System was only used for term recognition.

II. Solution

After a successful PoC (Proof of Concept), the company decided to deploy a language factory. A language factory centralizes all automatic steps such as content recycling, machine translation, automatic correction, quality estimation, etc. The secret of excellence in production often lies not so much in the individual machines but in how they work smoothly together. Collecting data at every process step allows for constant optimization of the factory’s performance. The language factory uses three simple, standardized API calls to communicate with the company’s LSPs for handover. Files to be reviewed by the expert-in-the-loop are posted, the status is polled, and the finished files are fetched.
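The post/poll/fetch handover described above can be sketched as a minimal in-memory simulation. The class name, method names, status values, and file names below are illustrative assumptions, not the company's actual API:

```python
# Minimal sketch of the three-call LSP handover: post a file for review,
# poll its status, fetch the finished result. All names are hypothetical.
import itertools

class LspHandover:
    """Simulates an LSP review queue behind the three standardized API calls."""
    _ids = itertools.count(1)

    def __init__(self):
        self._jobs = {}

    def post(self, filename, content):
        """POST /jobs -- hand a file over to the expert-in-the-loop."""
        job_id = next(self._ids)
        self._jobs[job_id] = {"file": filename, "content": content, "status": "in_review"}
        return job_id

    def poll(self, job_id):
        """GET /jobs/{id}/status -- ask whether the review is finished."""
        return self._jobs[job_id]["status"]

    def fetch(self, job_id):
        """GET /jobs/{id}/result -- retrieve the reviewed file."""
        job = self._jobs[job_id]
        if job["status"] != "done":
            raise RuntimeError("review not finished")
        return job["content"]

    def _complete(self, job_id, edited):
        """Stand-in for the human reviewer finishing the job."""
        self._jobs[job_id].update(status="done", content=edited)

lsp = LspHandover()
jid = lsp.post("manual_de.xliff", "raw MT output")
print(lsp.poll(jid))   # in_review
lsp._complete(jid, "post-edited text")
print(lsp.fetch(jid))  # post-edited text
```

Keeping the contract this small is what makes it easy for every LSP to implement, regardless of their preferred platform.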

The language factory connects in a similar automatic way with the company’s content management systems. It uses the COTI standard to collect work and also to place the translated files back in the right place.

By analyzing human edits the factory can train its AI and constantly improve its estimations. The linguistic assets collected in the content repository are used to train the machine translation. The Multilingual Knowledge System identifies domains and topics. This information is used to ensure that the largest chunks of the most relevant content are recycled. Sudden domain switches trigger QA warnings and lower QE scores.
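One plausible signal the factory could derive from human edits is the post-edit distance between the MT output and the reviewed text, a common proxy for MT quality. The case study does not specify the actual method; the library choice (`difflib`) and the threshold below are illustrative:

```python
# Hedged sketch: score each segment by how much the human reviewer changed it.
# Low-similarity segments are candidates for MT retraining and lower QE scores.
from difflib import SequenceMatcher

def post_edit_similarity(mt_output: str, human_edit: str) -> float:
    """Return a 0..1 similarity score between MT output and its post-edit."""
    return SequenceMatcher(None, mt_output, human_edit).ratio()

segments = [
    ("The device must be sterilized.", "The device must be sterilized."),  # untouched
    ("Push the bottom firmly.",        "Press the button firmly."),        # heavily edited
]

for mt, edited in segments:
    score = post_edit_similarity(mt, edited)
    print(f"{score:.2f}", "ok" if score > 0.9 else "collect for MT retraining")
```

Aggregating such scores per domain or topic is one way the Multilingual Knowledge System's classifications could feed back into the QE model.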

With every project, the factory collects more data, which is nicely visualized in a dashboard. This way, the factory can not only be easily monitored, but certain parameters can be controlled to optimize its operation. Finally, the cost-time-quality triangle can be smartly adjusted to meet business needs.

III. Experiences, Benefits, and Metrics

Months after deployment, the language factory has already processed millions of words into 36 languages supported by three LSPs. Besides delivering significant cost savings of around 28%, it allows the department to focus on more value-generating tasks than before. Its language experts can now enforce source text quality, prepare and train MT models, manage multilingual knowledge, define and adapt post-editing criteria, monitor the solution, and analyze process data.

Perhaps most important, though, is the constant collection of high-quality multilingual data. These linguistic assets are used for other applications, solving NLP tasks, and training LLMs. The vision is that the department delivers the data and knowledge to support any textual AI initiative of the company. It has therefore renamed itself Language Operations.

I. Problem

What is MVLP and What is a Localization Product?

MVLP, or minimum viable localization product, is a term that describes the initial stage of a localization product. Beyond workflow orchestration, an MVLP involves no human input other than the management and curation of the data used to train the AI responsible for generating it. The workflow consists of raw MT output followed by a trained AI post-editing pass, and corresponds to the LangOps manifesto's call to "Build Language-agnostic".
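The two-step workflow just described (raw MT, then a trained AI post-editing pass) can be sketched as a simple pipeline. Both steps are stubbed here; in production they would call an MT engine and a fine-tuned model respectively, and the glossary-substitution post-edit is an illustrative simplification:

```python
# Hypothetical sketch of an MVLP pipeline: machine translation followed by
# an AI post-editing pass. Function names and behavior are illustrative.
from typing import Dict

def machine_translate(source: str, target_lang: str) -> str:
    # Stub: a real implementation would call an MT engine's API.
    return f"[{target_lang}] {source}"

def ai_post_edit(raw_mt: str, glossary: Dict[str, str]) -> str:
    # Stub: a real pass would prompt a model trained on approved TM and
    # terminology data; here we only apply glossary corrections.
    for wrong, right in glossary.items():
        raw_mt = raw_mt.replace(wrong, right)
    return raw_mt

def build_mvlp(source: str, target_lang: str, glossary: Dict[str, str]) -> str:
    """Produce MVLP content: MT output corrected by the trained AI pass."""
    return ai_post_edit(machine_translate(source, target_lang), glossary)

print(build_mvlp("Open the dashbord.", "de", {"dashbord": "dashboard"}))
# -> [de] Open the dashboard.
```

The point of the composition is that no human touches individual segments; human effort goes into curating the data that drives `ai_post_edit`.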

A localization product is a concept that can be compared to a software product in many ways. It is helpful to think about it in terms of DevOps practices: the product is manipulated and iterated in sprints until it reaches its final form. Each version of the localization product delivers different added value, defined by the objectives of the localization sprint.

II. Solution

Why Do We Need a Minimum Viable Localization Product?

The MVLP serves as the base working product of localization. Having been machine translated and AI post-edited, it is a full placeholder for content that, while not finalized, can be used to gather data on user behavior and traffic.

This was the case for a client of Native Localization, who agreed to a workflow that introduced real-time data into their localization decision-making.

The client, a software development company in the fintech sector, maintains a product platform as well as a knowledge base that helps customers use the product to maximum efficiency. After Native had localized the product strings, it made sense to follow up with localization of the knowledge base. However, the client's localization budget for that fiscal year was already spent, and this portion of the content (quite substantial, as supporting documentation tends to be) had not been included.

Leaving a disparity between content would not be efficient from a UX perspective, so the solution required a LangOps-based approach, which dictates that we must "leverage all data and tech" to make smart localization decisions. An MVLP was created for five main topic articles in 16 languages using the DeepL MT engine, followed by an OpenAI-powered AI engine trained with approved translation memory and terminology data from previous localization work. The AI engine was further prompted on style, untranslatables, product names, and so on. In this case, the data was enough to reduce the disparity from human output to a minimum. The articles were published in MVLP form for a month. This yielded enough Google Analytics data for Native to execute a Blackbird automation that gathered the analytics and produced a report of which articles in which languages generated over 10,000, over 5,000, and over 2,000 impressions. Based on the report, Native proposed a staggered localization effort with three priority levels, giving the client a chance to invest in localization where it mattered most.
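The tiering step of that report can be sketched as a simple bucketing of impressions into the three priority levels. The thresholds match the case study; the article data and the `tier` function itself are invented for illustration (the actual automation ran in Blackbird against Google Analytics):

```python
# Illustrative sketch: bucket published MVLP articles into three localization
# priority levels by impression count. Article data is hypothetical.
from typing import Optional

THRESHOLDS = [(10_000, "priority 1"), (5_000, "priority 2"), (2_000, "priority 3")]

def tier(impressions: int) -> Optional[str]:
    """Return the priority level for an article, or None if it stays MVLP."""
    for floor, label in THRESHOLDS:
        if impressions >= floor:
            return label
    return None  # below 2,000 impressions: remains MVLP for now

articles = {
    ("getting-started", "de"): 14_200,
    ("billing-faq", "ja"): 6_300,
    ("api-limits", "pt"): 2_450,
    ("changelog", "fi"): 800,
}

report = {key: tier(n) for key, n in articles.items() if tier(n)}
for (slug, lang), level in sorted(report.items(), key=lambda kv: kv[1]):
    print(f"{level}: {slug} ({lang})")
```

The staggered proposal then simply follows the buckets: fully localize priority 1 first, and leave sub-threshold articles in MVLP form.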

A follow-up solution was later applied to marketing, where the MVLP was used in A/B testing with smaller audiences. The previously trained AI model, now supplemented with marketing-related data such as ICPs, content pillars, and messaging intent, was used to gauge which ideas resonated more than others in 16 global markets at once. Marketing specialists later drew insights from the data and created campaigns that performed on average 20% better than their previous ones. In addition, the insights cost a fraction of what localized A/B testing would have cost a year before.

III. Experiences, Benefits, and Metrics

Learnings and Benefits

Applying comparatively new LangOps concepts to real-time use cases often yields many conclusions and much data for further iteration. In this case, we learned that for an MVLP to come as close as possible to human parity, the data used to train the AI matters a great deal. Results may vary, but to create an MVLP of sufficient quality, data classification and a baseline have to be established to ensure the MVLP is actually usable.

The MVLP significantly brings down the cost of A/B testing because the workflow is rather simple. Reduced costs allow designers to be bolder with their ideas, inviting more creativity and freedom without worrying about expenses. Wider A/B testing means better live results.

The MVLP brings live data into localization workflows. The current digital landscape relies on applying data as quickly as possible to create impact. The MVLP is the first iteration of a localization product, adjusted and polished with each localization sprint so that the software becomes truly relevant and resonant on a global scale.

I. Problem

Language service providers and internal language departments often face the challenge of receiving content for translation that is technologically and linguistically unsuitable for it. To improve source-language quality, it is important to reach out to upstream processes and win them over as sponsors of a global content delivery process. Also, as stated in the LangOps manifesto, the world is shifting from a one-way communication paradigm to a conversation, a bidirectional flow of information. And, of course, corporate end customers are demanding AI-driven solutions. LSPs and internal language services will have to cater to these new needs in order to stay relevant.

The challenge has been that traditional TMS and CAT tools are built for experts, not for upstream, non-linguistic stakeholders. It has therefore always proven difficult to onboard content creators, developers, engineers, and the like onto a common platform.

II. Solution

Our LangOps solution combines all the functionality and data access points which corporate end users need in order to interact with “language”. This includes manual and automatic content and translation project creation, like you find in traditional localization portals, of course. But much more than this, it also provides terminology retrieval, management and verification options, machine translation solutions, taxonomies and structure data, systematic translator query management which helps pinpoint content issues, review or quality management features and more. These functionalities are completely customizable to keep the user interface simple and deliver optimal, tailored user experience. That way, onboarding enterprise-wide stakeholders is much easier and faster.

On the back-end, our portal integrates with the traditional TMSs and BMSs, but also authoring tools, content management platforms and proprietary or commercial corporate tools which can consume language data. We make sure all these platforms are kept up to date on the data. By integrating linguistic assets into corporate tools and platforms, we bring their functionality directly to the end users and thus increase the benefits and values customers get out of them.

III. Experiences, Benefits, and Metrics

We believe our platform is a major step towards a true LangOps platform. It gives corporate users exactly the tools and data they require, integrates with all the required upstream and downstream processes and hides the complexities of language technology from those who do not need to be exposed to it directly.

It has made corporate language management much easier to use and spreads the benefit of linguistic assets to a much larger audience in corporate environments. This in turn makes it much easier to obtain budgets and define upstream processes to improve content and communication throughout the entire organization and in all languages.