Case Studies

Discover transformative solutions in action. See firsthand how our Pioneers’ commitment to tackling diverse challenges delivers tangible results. Explore our tailored approaches for navigating complexities, ensuring measurable success.

I. Problem

A language service of a large life science company was facing the typical challenges of common translation workflows. It was difficult to merge translation memory and machine translation into a single paradigm. There was a huge in-house effort to export files, select matching translation memories, send and receive translation packages, import files, and update databases. Additionally, each LSP (Language Service Provider) had a preferred platform to exchange files. The department had little visibility into how the translations were actually made, thus lacking the data to improve the process. Its valuable Multilingual Knowledge System was only used for term recognition.


II. Solution

After a successful PoC (Proof of Concept), the company decided to deploy a language factory. A language factory centralizes all automatic steps such as content recycling, machine translation, automatic correction, quality estimation, etc. The secret of excellence in production often lies not so much in the individual machines but in how they work smoothly together. Collecting data at every process step allows for constant optimization of the factory’s performance. The language factory uses three simple, standardized API calls to communicate with the company’s LSPs for handover. Files to be reviewed by the expert-in-the-loop are posted, the status is polled, and the finished files are fetched.
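The three-call handover described above can be sketched as follows. This is a minimal simulation, not any LSP's actual API: the endpoint is replaced by an in-memory stub, and all names and payload shapes are illustrative assumptions.

```python
# Sketch of the three-call LSP handover: post files, poll status, fetch results.
# FakeLSP stands in for the remote review endpoint; in reality each call
# would be an HTTP request against the LSP's platform.

import time

class FakeLSP:
    """In-memory stand-in for an LSP's expert-in-the-loop review endpoint."""
    def __init__(self):
        self.jobs = {}
        self.next_id = 1

    def post_files(self, files):      # call 1: hand over files for review
        job_id = self.next_id
        self.next_id += 1
        # in this simulation the review "finishes" after two status polls
        self.jobs[job_id] = {"files": files, "polls": 0}
        return job_id

    def poll_status(self, job_id):    # call 2: poll until the review is done
        job = self.jobs[job_id]
        job["polls"] += 1
        return "done" if job["polls"] >= 2 else "in_progress"

    def fetch_files(self, job_id):    # call 3: fetch the reviewed files
        return [f + ".reviewed" for f in self.jobs[job_id]["files"]]

def handover(lsp, files, interval=0.0):
    """Drive one submission through the post/poll/fetch cycle."""
    job_id = lsp.post_files(files)
    while lsp.poll_status(job_id) != "done":
        time.sleep(interval)          # back off between polls
    return lsp.fetch_files(job_id)
```

Because the protocol is reduced to these three standardized calls, swapping in a new LSP only requires implementing the same small interface.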

The language factory connects in a similar automatic way with the company’s content management systems. It uses the COTI standard to collect work and also to place the translated files back in the right place.

By analyzing human edits, the factory can train its AI and constantly improve its estimations. The linguistic assets collected in the content repository are used to train the machine translation. The Multilingual Knowledge System identifies domains and topics. This information is used to ensure that the largest chunks of the most relevant content are recycled. Sudden domain switches trigger QA warnings and lower QE scores.
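One simple way to quantify those human edits can be sketched with the standard library. The factory's actual QE metric is not specified in the case study, so a character-level similarity ratio stands in for it here.

```python
# Minimal sketch of mining human edits for quality-estimation signals.
# difflib's similarity ratio is an illustrative stand-in for the
# factory's (unspecified) edit-distance metric.

import difflib

def post_edit_score(mt_output, human_edit):
    """Return similarity in [0, 1]; 1.0 means the expert changed nothing."""
    return difflib.SequenceMatcher(None, mt_output, human_edit).ratio()
```

Segments the experts barely touch raise confidence in similar content, while heavily edited segments flag domains where the engine needs retraining.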

With every project, the factory collects more data, which is nicely visualized in a dashboard. This way, the factory can not only be easily monitored, but certain parameters can be controlled to optimize its operation. Finally, the cost-time-quality triangle can be smartly adjusted to meet business needs.


III. Experiences, Benefits, and Metrics

Months after deployment, the language factory has already processed millions of words into 36 languages supported by three LSPs. Besides already delivering significant cost savings of around 28%, it allows the department to focus on more value-generating tasks than before. Its language experts can now enforce source text quality, prepare and train MT models, manage multilingual knowledge, define and adapt post-editing criteria, monitor the solution, and analyze process data.

Perhaps most important, though, is the constant collection of high-quality multilingual data. These linguistic assets are used for other applications, solving NLP tasks, and training LLMs. The vision is that the department delivers the data and knowledge to support any textual AI initiative of the company. Therefore, it has renamed itself Language Operations.


I. Problem

What is MVLP and What is a Localization Product?

MVLP, or minimum viable localization product, is a term that describes the initial stage of a localization product. Apart from workflow orchestration and the data management and curation used to train the AI responsible for generating it, an MVLP contains no human input. It is a workflow that consists of raw MT output followed by a trained AI post-editing process, and it corresponds to the LangOps manifesto principle to “Build Language-agnostic”.

A localization product is a concept that can be compared to a software product in many ways. It is helpful to think about it in terms of DevOps practices, where a product is manipulated and iterated in sprints until it reaches its final form. Each version of a localization product delivers different added value, defined by the localization sprint objectives.


II. Solution

Why Do We Need a Minimum Viable Localization Product?

MVLP serves as the base working product of localization. Having been machine translated and AI post-edited, it is a full placeholder for content that, while not finalized, can be used to gather data on user behavior and traffic.

This was the case for a client of Native Localization, who agreed to create a workflow that introduced real-time data into their localization decision-making.

The client, a software development company in the fintech sector, maintains a product platform as well as a knowledge base that enables customers to use the product to maximum efficiency. After Native had performed product string localization, it made sense to follow up with the localization of the knowledge base. However, the localization budget for that fiscal year was already spent, and this portion of content, quite substantial as supporting documentation tends to be, was not included.

It would not be efficient from a UX perspective to have disparity between content, so the solution required a LangOps-based approach, which dictates that we must “leverage all data and tech” in order to make smart localization decisions. An MVLP was created for five main topic articles in 16 languages using the DeepL MT engine, accompanied by an OpenAI-powered AI engine trained with approved translation memory data and terminology data from previous localization work. The AI engine was further prompted on style, untranslatables, product names, etc. In this case, this data was enough to reduce the gap to human output to a minimum. The articles were published in MVLP form for a month. This provided enough Google Analytics data for Native to execute a Blackbird automation, which gathered the Analytics data and created a report of which articles in which languages generated over 10,000, over 5,000, and over 2,000 impressions. Based on the report, Native proposed a staggered localization effort with three priority levels, giving the client a chance to invest in localization where it mattered the most.
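The report's three-tier prioritization can be sketched as a simple threshold function. The thresholds come from the case study; the sample articles and field layout are made up for illustration.

```python
# Sketch of the impression-based prioritization from the reporting step.
# Thresholds (10,000 / 5,000 / 2,000) are from the case study; the
# article data below is invented for illustration.

def priority(impressions):
    """Map monthly impressions to the three priority levels (1 = highest)."""
    if impressions > 10_000:
        return 1
    if impressions > 5_000:
        return 2
    if impressions > 2_000:
        return 3
    return None  # below all thresholds: defer localization for now

articles = [("billing", "de", 12_400),
            ("api-keys", "fi", 3_100),
            ("refunds", "pt", 800)]
report = [(slug, lang, priority(n)) for slug, lang, n in articles]
```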

A surplus solution was later applied to marketing, where the MVLP was used in marketing A/B testing with smaller audiences. The previously trained AI model, now supplemented with marketing-related data such as ICPs, content pillars, and messaging intent, was used to gauge which ideas resonate more than others in 16 global markets at once. Marketing specialists later drew insights from the data and created campaigns that performed on average 20% better than their previous ones. In addition, the insights cost a fraction of what localized A/B testing would have cost a year before.


III. Experiences, Benefits, and Metrics

Learnings and Benefits

Applying comparatively new LangOps concepts to real-time use cases often provides many conclusions and data for further iteration. In this case, we learned that for an MVLP to come as close as possible to human parity, the data used to train the AI matters a lot. Results may vary, but in order to create an MVLP of sufficient quality, a data classification and baseline have to be established to ensure the MVLP is actually usable.

MVLP significantly brings down the cost of A/B testing because the workflow is rather simple. Reduced costs potentially allow designers to be a bit more carefree with their ideas, inviting more creativity and freedom without worrying about the expenses. Wider A/B testing means better live results.

MVLP brings live data into localization workflows. The current digital landscape relies on applying data as quickly as possible to create impact. The MVLP is the first iteration of a localization product that is adjusted and polished with each localization sprint so that the software becomes truly relevant and resonant on a global scale.


I. Problem

Language service providers and internal language departments often face the challenge of receiving content for translation that is technologically and linguistically unsuitable for translation. In order to improve the quality of the source language, it is important to reach out to upstream processes and win them over as sponsors for a global content delivery process. Also, as stated in the LangOps manifesto, the world is changing from a one-way communication paradigm to a conversational, bidirectional flow of information. And, of course, corporate end customers are demanding AI-driven solutions. LSPs and internal language services will have to cater to these new needs in order to stay relevant.

The challenge has been that traditional TMS and CAT tools are built for experts and not for upstream, non-linguistic stakeholders. Therefore it has always proven difficult to onboard content creators, developers, engineers and the like onto a common platform.


II. Solution

Our LangOps solution combines all the functionality and data access points which corporate end users need in order to interact with “language”. This includes manual and automatic content and translation project creation, as you find in traditional localization portals, of course. But much more than this, it also provides terminology retrieval, management, and verification options, machine translation solutions, taxonomies and structured data, systematic translator query management which helps pinpoint content issues, review and quality management features, and more. These functionalities are completely customizable to keep the user interface simple and deliver an optimal, tailored user experience. That way, onboarding enterprise-wide stakeholders is much easier and faster.

On the back-end, our portal integrates with the traditional TMSs and BMSs, but also authoring tools, content management platforms and proprietary or commercial corporate tools which can consume language data. We make sure all these platforms are kept up to date on the data. By integrating linguistic assets into corporate tools and platforms, we bring their functionality directly to the end users and thus increase the benefits and values customers get out of them.


III. Experiences, Benefits, and Metrics

We believe our platform is a major step towards a true LangOps platform. It gives corporate users exactly the tools and data they require, integrates with all the required upstream and downstream processes and hides the complexities of language technology from those who do not need to be exposed to it directly.

It has made corporate language management much easier to use and spreads the benefit of linguistic assets to a much larger audience in corporate environments. This in turn makes it much easier to obtain budgets and define upstream processes to improve content and communication throughout the entire organization and in all languages.


Matthias Caesar
I. Problem

In today’s globalized world, businesses often encounter significant challenges when it comes to software development and localization. The traditional silos between these two critical processes can lead to inefficiencies, delays, and even errors in the final product. This divide between software development and localization teams has long been a stumbling block for organizations striving for a global reach.


II. Solution

Enter LangOps, an innovative approach that serves as a natural extension of DevOps, seamlessly uniting the worlds of software development and localization. LangOps empowers organizations to break down the silos between these traditionally separate domains, fostering collaboration and accelerating the delivery of localized software products.


III. Experiences, Benefits, and Metrics

LangOps accomplishes this by integrating localization considerations into the software design and development pipeline from the very beginning. Here’s how it works:

Early Integration: With LangOps, localization isn’t an afterthought; it’s an integral part of the development or even the design process. Designers and developers work alongside localization experts to ensure that internationalization is considered early on. This prevents common localization issues.

Continuous Localization: LangOps encourages continuous integration and continuous localization. As new features and updates are developed, they are simultaneously localized. This ensures that localized versions are always up-to-date and reduces the lag time between development and localization. This can be achieved with our L10n Portal and Services, which include NMT and AI to automate steps along the way, leading to a lean and agile end-to-end process.

Automated Workflows: Automation plays a key role in LangOps. Automated testing, quality assurance, and deployment pipelines streamline the localization process, reducing the potential for human error and saving valuable time.


Marion Randelshofer
I. Problem

The demand for rapid, precise, and context-aware translations in the language services industry is at an all-time high. Traditional machine translation systems often miss the subtleties of language, requiring extensive post-editing and failing to meet the specific needs of diverse projects and clients. This challenge necessitates a solution that can understand and replicate the nuances of human language, adapt to various styles and tones, and integrate seamlessly into existing translation workflows.


II. Solution

GPT Integration in translate5

translate5’s innovative approach integrates Generative Pre-trained Transformer (GPT) technology as a customizable machine translation engine. This solution enables project managers (PMs) to create bespoke language resources tailored to each project’s unique requirements, leveraging:

Visual Translation Feature

translate5 offers a “What You See Is What You Get” (WYSIWYG) interface, allowing translators to work with the text within the layout for various source file formats, including CMS, Office, InDesign, video subtitling, and Android/iOS apps. This feature ensures translations fit the visual and cultural context of the original document, addressing challenges such as text length and layout compatibility.

Custom Training for GPT

PMs can train GPT with system messages, example data, and terminology, utilizing linguistic resources stored in translate5. This process, similar to onboarding a new translator with a style guide, ensures the AI’s output aligns closely with project expectations.
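As a rough illustration, such a configuration step could assemble a chat request whose system message carries the style rules and terminology, with stored example segments attached as few-shot pairs. The message format follows the OpenAI chat API, but the helper, field names, and sample data below are assumptions, not translate5’s actual implementation.

```python
# Illustrative sketch: building a chat-API request from the linguistic
# resources a PM curates (style rules, termbase entries, example pairs).
# Nothing here is translate5's real code; it only shows the pattern of
# "onboarding" the model like a new translator with a style guide.

def build_messages(style_rules, terminology, examples, source_segment):
    system = "You are a technical translator (EN->DE).\n"
    system += "Style rules:\n" + "\n".join(f"- {r}" for r in style_rules)
    system += "\nTerminology (always use these target terms):\n"
    system += "\n".join(f"- {src} -> {tgt}" for src, tgt in terminology.items())

    messages = [{"role": "system", "content": system}]
    for src, tgt in examples:  # few-shot pairs from approved translations
        messages.append({"role": "user", "content": src})
        messages.append({"role": "assistant", "content": tgt})
    messages.append({"role": "user", "content": source_segment})
    return messages
```

The returned list would then be sent to the chat completions endpoint; changing a client's style guide or termbase changes the engine's behavior without any model retraining.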

Collaborative Development and the Open Source Advantage

The successful integration of GPT within translate5 is the result of collaboration between the translate5 team, led by MittagQI, and World Translation. This partnership has facilitated technical development and ensured the solution meets the high standards required by professional translation services. As a third-generation open-source project, translate5 is backed by MittagQI, driving innovation, development, support and maintenance.


III. Experiences, Benefits, and Metrics

Evaluation and Impact

Translating technical documentation for Leica Geosystems from English to German showcased GPT’s capabilities, with its output compared against DeepL. Independent evaluations by experienced translators highlighted GPT’s fluency, idiomatic precision, and alignment with the client’s desired style and tone. Feedback emphasized GPT’s superior handling of style and readability, though noting the need for improvement in translation precision.

This advancement enables PMs to quickly create MT language resources in translate5, customized for each client or project. This transformation requires PMs to possess a deep linguistic understanding, making them prompt engineers who tailor AI output to client expectations, enhancing both efficiency and quality.

Conclusion

The integration of GPT into translate5 marks a significant advancement in translation technology, offering a customizable, efficient, and accurate solution for language service providers. This case study exemplifies the potential of AI and human expertise to meet the translation industry’s evolving demands, setting new benchmarks for quality and innovation. As translate5 continues to explore GPT’s use for various applications, it builds its leadership in leveraging AI to enhance language services.


I. Problem

Thousands of translation and audio-to-text submissions ran through the localization department annually via manual processes, sustained by a significant administrative investment. In addition, other departments, such as Editorial and Marketing, needed increasing quantities of transcription and translation services.


II. Solution

After a thorough review of available technology, the non-profit decided to design its own component-based language factory: a set of microservices in a recursive microservice architecture, tied together by the Blackbird.io workflow orchestrator as its backbone.


A folder portal (or “hot folder”) was created on Dropbox in which translation or audio-to-text submissions can be placed. A submission is automatically classified based on file type and file name and automatically assigned to the appropriate semi-automated workflows. The file names are created with a simple-to-use file name builder in a Google Sheet, and those file names are automatically interpreted through Regex (regular expression) classifications within Blackbird.io.
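The filename-driven classification can be sketched with a named-group regex. The naming convention shown here (TYPE_LANG_title.ext) is an assumption for illustration; the real pattern lives inside the Blackbird.io workflow.

```python
# Sketch of Regex-based classification of hot-folder submissions.
# The TYPE_LANG_title.ext convention and the TR/AT type codes are
# invented for this example.

import re

PATTERN = re.compile(
    r"^(?P<type>TR|AT)_(?P<lang>[a-z]{2})_(?P<title>.+)\.(?P<ext>\w+)$"
)

def classify(filename):
    m = PATTERN.match(filename)
    if not m:
        return None  # unknown submissions fall through to manual triage
    workflow = "translation" if m.group("type") == "TR" else "audio-to-text"
    return {"workflow": workflow, "lang": m.group("lang"),
            "title": m.group("title"), "ext": m.group("ext")}
```

Everything downstream (Trello template choice, Slack updates, TMS domain) can key off the returned fields.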


As each file submission travels through the workflow, the steps are semi-automatically updated on Slack channels and Trello (Kanban-style) through Blackbird.io. Trello has its own automations set up to remove and assign individuals to cards at various steps. These product-based cards are templated in Trello, and the workflow automatically copies the correct template based on the filename of the original submission.


Aside from translation submissions, Blackbird also enabled a microservice to be built for audio-to-text using Transkriptor and OpenAI’s Whisper API. Through prompting and classifications, this microservice can transcribe audio and add paragraph breaks, timestamps, and speaker diarization.
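The post-processing half of that microservice, turning raw transcript segments into timestamped, speaker-labelled lines, can be sketched as below. The segment structure is an assumption for illustration; real speech-to-text APIs such as Whisper return their own (richer) segment schema.

```python
# Sketch of transcript post-processing: formatting segments (as a
# speech-to-text API might return them) into readable, timestamped
# output. The {"start", "speaker", "text"} shape is assumed here.

def format_transcript(segments):
    """segments: list of {"start": seconds, "speaker": str, "text": str}."""
    lines = []
    for seg in segments:
        minutes, seconds = divmod(int(seg["start"]), 60)
        stamp = f"[{minutes:02d}:{seconds:02d}]"
        lines.append(f"{stamp} {seg['speaker']}: {seg['text'].strip()}")
    return "\n".join(lines)
```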


Aside from an MT-only microservice, a TMS microservice classifies files into four different domains in order to populate four separate translation memories and glossaries.


Other microservices automatically convert files into the correct format for each tool and can also convert output files into the desired final file formats.


Once a file submission has triggered a workflow, the workflow is set up to add useful information to the files, such as word counts, and it automatically archives file versions to an archive folder.


III. Experiences, Benefits, and Metrics

Manual administrative tasks have been reduced by around 40 hours a week.

Various digital tools have been, or are in the process of being, sunsetted and replaced by API-enabled equivalents, and people outside of the localization department can submit translation and audio-to-text requests with ease.

LangOps staff can easily analyse and adjust each step of the process. Third-party components of the workflow can easily be switched out or adjusted. Language assets can be used, independently of the TMS, for further LangOps workflows and model training.


I. Problem

MakesYouLocal, an e-commerce localisation agency, specialises in helping online businesses thrive in new markets through localised customer service, marketing, and translation solutions. While they were already delivering quality translations, MakesYouLocal recognised an opportunity to enhance their efficiency and scalability to better serve their expanding client base. The goal was to increase productivity, reduce costs, and maintain the high quality their clients expected without compromising on speed or accuracy.


II. Solution

To achieve these objectives, MakesYouLocal partnered with EasyTranslate to implement HumanAI, an advanced technology that merges artificial intelligence with human expertise. This solution was designed to optimise their translation process, making it faster, more cost-effective, and scalable, while preserving the essential human touch that ensures cultural and linguistic accuracy.

Key elements of the solution included:


  • AI-Driven Pre-Translation: HumanAI uses sophisticated machine learning algorithms to pre-translate content, significantly reducing the workload for human translators. The AI was trained to align with MakesYouLocal’s specific linguistic preferences and terminologies, ensuring that the initial output was already of high quality.

  • Human Oversight and Refinement: After the AI completes the pre-translation, language leads step in to review and perfect the content. This collaboration ensures that the final translations meet their stringent quality standards and reflect the appropriate cultural and brand nuances.

  • LangOps Platform: EasyTranslate’s LangOps platform provided an intuitive, collaborative interface where MakesYouLocal’s team could manage, review, and edit translations. This platform enhanced workflow efficiency and enabled seamless communication among team members.
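The pre-translate-then-review split described in these elements can be sketched as a routing step: a toy "AI" returns a translation plus a confidence score, and low-confidence segments are queued for a language lead. All names, the sample data, and the threshold are illustrative assumptions, not HumanAI's actual mechanics.

```python
# Sketch of AI pre-translation with confidence-based routing to human
# reviewers. The fake_mt table and the 0.8 threshold are invented for
# illustration only.

def pretranslate(segment):
    """Stand-in for the AI pre-translation step: (target, confidence)."""
    fake_mt = {
        "Add to cart": ("In den Warenkorb", 0.95),
        "Free returns within 30 days": ("Kostenlose Rücksendung", 0.55),
    }
    return fake_mt.get(segment, (segment, 0.0))

def route(segments, threshold=0.8):
    """Split segments into auto-approved and human-review queues."""
    auto, review = [], []
    for src in segments:
        tgt, confidence = pretranslate(src)
        (auto if confidence >= threshold else review).append((src, tgt))
    return auto, review
```

Raising the threshold sends more segments to human reviewers (higher quality assurance, higher cost); lowering it leans harder on the AI.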

III. Experiences, Benefits, and Metrics

The integration of EasyTranslate’s HumanAI technology into MakesYouLocal’s operations brought substantial improvements, delivering significant benefits in terms of efficiency, cost savings, and translation quality.

1. Increased Efficiency:

– Result: The time required to complete translation projects was dramatically reduced, with projects that previously took 20 hours now being completed in just 2 hours.

– Impact: This tenfold increase in productivity allowed MakesYouLocal to take on more projects, enhance their service offerings, and better meet client demands without adding strain to their resources.

2. Cost Savings:

– Result: Translation costs were reduced by 90%, significantly lowering operational expenses.

– Impact: These savings allowed MakesYouLocal to maintain competitive pricing while increasing profit margins, further strengthening their market position.

3. Maintained High-Quality Standards:

– Result: HumanAI achieved an exceptional accuracy rate of one mistake per 1,000 words, far surpassing traditional benchmarks.

– Impact: This high level of accuracy ensured that MakesYouLocal could continue to deliver consistent, high-quality translations that met their clients’ expectations for brand voice and cultural relevance.

4. Enhanced Scalability:

– Result: With the efficiency gains and reduced costs, MakesYouLocal was able to scale its operations more effectively.

– Impact: They expanded their capacity to handle more projects, allowing them to grow their client base and enhance their competitive edge in the e-commerce localisation market.

Conclusion

The implementation of EasyTranslate’s HumanAI technology was a strategic move that allowed MakesYouLocal to optimise its translation processes, resulting in enhanced efficiency, significant cost savings, and maintained high-quality standards. By leveraging the strengths of both AI and human expertise, MakesYouLocal was able to better serve its clients, expand its business, and solidify its position as a leader in e-commerce localisation.


Cinzia Bazzani
I. Problem

A large manufacturing company struggled with fragmented technical documentation workflows. Centralizing diverse content formats, translation memory, and machine translation processes into a single data-driven environment was challenging. There was a huge in-house effort to reuse siloed technical content in different formats and platforms, send and receive translation packages, import files, and update databases. Each department used different tools for writing, translating, and searching documents, leading to inefficiencies and a lack of organized, centralized data.


II. Solution

Following a successful proof of concept (PoC), the company implemented LOGOSYS, the Logos Multilingual Content Hub. LOGOSYS integrates AI-powered solutions for:


  • Content Conversion: Transforms unstructured content into any Dita XML-based CMS format.

  • Content Optimization: Uses the myAuthorAssistant app to apply custom terminology, authoring data, and writing rules.

  • Content Generation: Produces coherent and relevant content based on AI patterns and datasets.

  • Content Search: Quickly retrieves information using AI-driven search capabilities.

  • Translation: Enhances translation quality by combining human edits with Neural Machine Translation (NMT) outputs. This approach refines the accuracy of translations and leverages stored linguistic resources to train both generative AI models and NMT systems. The system continuously monitors and adjusts key parameters, optimizing performance while balancing cost, time, and quality. By adhering to the COTI standard, LOGOSYS ensures efficient collection of work and seamless integration of translated files back into the CMS, combining NMT output with human post-editing.

III. Experiences, Benefits, and Metrics

Since implementing LOGOSYS, the hub has efficiently processed millions of words in 30 languages, achieving a 40% reduction in overall costs.

Key Benefits:


  • Cost Efficiency: Reduced translation and documentation costs by 40%.

  • Increased Productivity: Enabled focus on enhancing source quality, training generative AI models, and managing multilingual knowledge.

  • Enhanced Quality: Ensured consistent terminology and writing standards across documents.

  • Faster Retrieval: Improved response times for technical support with efficient content search.

  • Valuable Data: Built a comprehensive data repository for NLP tasks and custom large language model training, supporting broader AI initiatives.



Cristina Adrio Gonzáles
I. Problem

Many organizations are rushing to adopt new technologies and tools based on what their competitors have announced, without evaluating if these solutions fit their specific needs. This leads to wasted resources, frustrated teams, and inefficient processes. A common scenario involves businesses investing in AI-driven platforms and automation technologies that either don’t integrate with their internal tools or fail to meet the unique requirements of their workflows. As a result, companies end up with expensive solutions that don’t deliver the expected efficiency gains.

This issue is further complicated in the context of multilingual operations, where linguistic quality, cultural adaptation, and functional integration must be considered. Failing to accurately assess these aspects early in the customer journey, especially during discovery calls and pre-sales discussions, results in a misalignment between customer expectations and the solutions provided. The challenge is not only technical but also communicative—how do we ensure that customers understand what they need versus what they think they want based on market trends?


II. Solution

Achieving true efficiency gains starts long before the implementation phase—it begins with communication. A comprehensive, structured assessment of each customer’s specific needs and existing ecosystem is critical. This includes:

1. Deep Dive Discovery Sessions: Engaging with the customer to map out their current tools, workflows, and business goals. By involving both technical and linguistic experts from the very beginning, we can assess not only the technology fit but also the linguistic accuracy required for multilingual applications.

2. Tailored Solution Architecture: Rather than pushing a one-size-fits-all approach, we co-create a solution blueprint that aligns with the customer’s existing systems and anticipates future growth. This involves a thorough evaluation of AI integration, automation capabilities, and linguistic quality management. The aim is to ensure that any technology investment results in streamlined workflows, not additional complexity.

3. Pre-Implementation Simulations: Before finalizing the tech stack, we conduct simulations using the customer’s real data and scenarios. This helps in identifying potential bottlenecks and ensures that the selected tools can seamlessly handle the workload, maintaining the desired levels of quality and accuracy in multilingual content.

4. Transparent Metrics and Feedback Loops: Establishing clear KPIs from the outset ensures that both parties can measure success. Regular feedback loops during pre-sales allow for adjustments in the solution design, making sure that the customer’s expectations are aligned with achievable outcomes.


    III. Experiences, Benefits, and Metrics

    Customers who underwent this thorough pre-sales and pre-implementation assessment have experienced significant efficiency improvements, often surpassing their initial expectations. For example:

    One global financial client, initially fixated on implementing a high-profile machine translation tool, discovered through our assessment that their internal systems were not optimized for such a solution. By adjusting the strategy to fit their existing tools and focusing on key integrations, they saved 25% in implementation costs and improved translation turnaround times by 40%.

    Another client, a multinational pharmaceutical company, faced issues with linguistic inconsistencies in their multilingual documentation. By involving linguistic experts early in the discovery phase, we ensured that the AI tools chosen for the project were tailored to their specific domain terminology and regional language variations. As a result, translation quality improved by 30%, and post-editing efforts were reduced by half.

    Overall, customers report greater confidence in their technology investments, as the solutions are clearly aligned with their operational goals. More importantly, by establishing communication as the foundation of the process, we’ve eliminated the guesswork that often leads to project delays and budget overruns.

    Conclusion

    Efficiency gains in technology adoption, particularly in language operations, don’t happen by accident. They are the result of deep customer understanding, transparent communication, and a tailored approach to solution design. By focusing on these aspects during the pre-sales and pre-implementation phases, companies can avoid costly mistakes, unlock the full potential of their tech investments, and ultimately drive better outcomes. This proactive, communicative approach is not just a best practice—it’s essential for long-term success.


    Serhiy Dmytryshyn
    I. Problem

    A rapidly growing SaaS company faced a pressing challenge: scaling its multilingual content operations to support a user base across 25+ countries. Their existing localization workflow was fragmented, relying on manual updates, email exchanges, and disconnected tools. This inefficiency led to delayed product launches, inconsistent translations, and mounting frustration among both developers and translators. The company needed a scalable, streamlined approach to manage localization, ensuring a seamless experience for their global audience without sacrificing speed or quality.


    II. Solution

    The company partnered with Crowdin to implement a comprehensive Language Operations (LangOps) strategy. Using Crowdin’s centralized platform, they integrated localization into their CI/CD pipeline, allowing real-time synchronization between their development tools and translation processes. The team utilized Crowdin’s advanced features, including:


    Automation: Automated file management and content updates eliminated manual errors.

    Collaboration: Translators, developers, and content creators worked together on a single platform, with real-time feedback loops.

    AI-Powered Tools: Crowdin’s AI-assisted translation tools boosted translator productivity and maintained consistency across languages.

    Custom Workflows: Tailored workflows ensured translations were reviewed and published with minimal disruption to the development timeline.

    By embedding LangOps directly into their processes, the company turned localization from a bottleneck into a competitive advantage.


    III. Experiences, Benefits, and Metrics

    The transition to Crowdin’s LangOps platform delivered immediate and measurable results:


    Faster Time-to-Market: Localization time was reduced by 40%, enabling simultaneous global launches.

    Improved Quality: Consistency in translations improved by 35%, thanks to shared glossaries and translation memory.

    Increased Team Efficiency: Automation and real-time collaboration cut repetitive tasks by 50%, freeing up resources for creative work.

    User Engagement: Customer satisfaction scores improved in non-English-speaking regions due to more culturally nuanced and error-free translations.

    The team praised Crowdin’s intuitive interface and robust API integrations, which fit seamlessly into their existing workflows. One project manager noted, “Crowdin transformed localization from an afterthought to a core part of our product development process.”


    By adopting a LangOps mindset with Crowdin, the company achieved operational excellence, providing an optimal user experience across languages and accelerating its global growth.


    I. Problem

    An organisation with over 2,500 language professionals, team leads, and managers was responsible for curating and localising millions of data points every month. The content produced was used in global products and services serving millions of users on a daily basis.

    Written documentation was the source of truth for the whole organisation: curation guidelines, style guides, language guides, and SOPs that everyone followed in their daily operations. But as the products expanded in scale and scope, it became apparent that the existing decentralised documentation structure constantly undermined consistency, recency, and usability across the organisation.

    Products were localised into more than 50 languages, each with their own unique requirements that needed to be considered at all stages of the product’s life cycle. Such requirements were hosted sporadically across documents with various levels of access and ownership, leading to the aforementioned issues of consistency and accuracy.

    Because the workforce was distributed across over 70 countries, English was chosen as the official language for all communication and documentation. A set of standards and quality measures was therefore needed to ensure clarity of instructions and purpose, and accessibility for all levels of English proficiency.

    The vast number of cultures in the project also meant that guidance and training on cultural sensitivities was needed to foster a cohesive and harmonious work environment that encouraged respectful collaboration.


    II. Solution

    The solution was a central platform hosting all documentation, one that adhered to strict privacy and security requirements and provided workers at all levels with up-to-date documentation for their products and locales.

    The new platform design started by identifying the needs of the different teams and their existing documentation structure, which provided great insight into possible structures for the design of the new system.

    The new platform needed to meet the following requirements:



    • Multilayered access levels for the different users of the platform.

    • A clear and easy-to-navigate structure that enables users to find the information they need with ease.

    • Easy editing and formatting tools that enable documentation creators to produce, and users to consume, appealing and informative documents.

    • A robust infrastructure with near 100% uptime due to the highly distributed nature of the organisation.


    Due to the product's nature and the high level of language integration, it was crucial to accommodate the specific needs of each language team in the requirement design, and to consider how each fits within the overall structure of the knowledge management system. Language guidance was not only for linguists but for anyone creating the product, as a reference for what may be needed to accommodate specific language needs.

    The platform was built based on the specified criteria, and work was ongoing to sift through the massive volume of documentation (around 7000 unique documents pre-migration) to evaluate the quality of the existing material and avoid any of its shortcomings.

    Redundant and obsolete knowledge was identified and isolated. General-level instructions were consolidated into general-purpose documentation, and style guides and templates were standardised to ensure consistency of the source language. Language styles and terminology were reviewed, updated, and published for the whole organisation. Training was provided to all stakeholders on the new platform, and pilot programs were conducted to ensure stability and proper functionality before the full-scale migration.


    III. Experiences, Benefits, and Metrics

    Before the start of the migration, it was crucial to inform and educate all stakeholders about the upcoming change and to address any concerns they had about the process. Feedback was collected and acted upon, generating new requirements to be implemented in the new platform.

    The involvement and support of stakeholders at all levels ensured that the migration was adopted rather than forced, which led to the successful and seamless migration of more than 5,000 documents (after the culling of obsolete and redundant material), 80% of which were completed in the first of the three weeks needed for the full migration.

    After the migration was completed, a significant improvement in product quality was noticed (over 20% according to the internal quality scores), which included higher quality localisation due to the integration of language needs throughout the product lifecycle.

    The centralisation of knowledge across these many teams enabled the organisation to have consistent and clear messaging that was not possible before, which reflected in the positive reception of the new products that were released after the migration.


    I. Problem

    The rapidly evolving relationship between human expertise and artificial intelligence in the interpretation technology industry has generated the buzz-worthy catchphrase “Human in the mix” among language industry pundits and presenters. But what does it truly mean? More importantly, what should it mean? We need to map a clear vision for the advancement of AI tools and technologies within the interpreting profession.


    II. Solution

    • Defining “Human in the Mix”: Moving beyond the marketing jargon to articulate a meaningful role for interpreters alongside the release of AI tools.

    • AI-Augmented Workflows: Examining how automation, machine learning, and real-time analytics can and will enhance interpreter performance and efficiency.

    • Balancing Act: Addressing the concerns about over-reliance on AI and ensuring that the technology supports and empowers rather than replaces professional interpreters.

    • Future Visions: What does an ideal partnership between interpreters and technology look like, and how do we think we can build together towards it?


    III. Experiences, Benefits, and Metrics

    Interpreting technology companies are developing technology solutions that respect and amplify human expertise while embracing AI as a powerful ally in the profession. There are already many demonstrable practical examples of how AI is being used today to enhance interpreter productivity without compromising the nuanced decision-making and cultural sensitivity that only human interpreters can and will continue to provide for the foreseeable future.


    Manuel Herranz
    I. Problem

    BYD Auto Japan faced significant challenges as they worked to establish and expand their online presence in the Japanese automotive market. As a Chinese manufacturer entering Japan, they encountered substantial language and cultural barriers between their Chinese operations and Japanese market requirements. The company needed to ensure consistent, high-quality customer service and technical documentation in Japanese while maintaining their brand identity and adapting to local market preferences. A particular challenge was the requirement for accurate technical and marketing material translation that maintained technical precision.


    II. Solution

    To address these challenges, BYD implemented a Deep Adaptive AI Translation solution through their ECO Platform, turning TMX and terminology assets into a vector database and combining agentic MTQE and LQA in their new LQEQA metric. This solution not only automates document translation using BYD’s terminology through integrated AI-powered translation capabilities across multiple touchpoints, but also handles fluency estimation, cultural adaptation, and the verification of technical terms and preferred marketing expressions. It has reduced CAT-tool dependency: the client has largely moved to document-level or paragraph-level translation, a much more natural flow in localization, customer service communication, and after-sales support, including real-time translation.
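    The core of the vector-database step can be sketched as follows. Everything here is an illustrative stand-in, not BYD's implementation: a bag-of-words vector replaces real sentence embeddings, and the segment pairs are invented.

```python
from collections import Counter
from math import sqrt

# Toy "embedding": a bag-of-words count vector. A production setup
# would use a neural sentence-embedding model over the real TMX assets.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented TM segment pairs standing in for the TMX content.
tm_segments = [
    ("battery warranty terms", "バッテリー保証条件"),
    ("charging station locations", "充電ステーション所在地"),
]
index = [(embed(src), src, tgt) for src, tgt in tm_segments]

# Retrieve the most similar stored segments for a new source sentence,
# so they can be fed to the AI translation step as context.
def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda e: cosine(q, e[0]), reverse=True)
    return [(src, tgt) for _, src, tgt in ranked[:k]]

print(retrieve("warranty terms for the battery"))
```

Retrieval like this is what lets the platform work at document or paragraph level: relevant TM and terminology material is pulled in as context rather than matched segment by segment in a CAT tool.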


    III. Experiences, Benefits, and Metrics

    The implementation of this solution delivered significant benefits across multiple areas of the business. From a business impact perspective, BYD successfully maintained consistent brand messaging across the Japanese market while preserving their global identity, achieved faster market entry and expansion, and effectively supported their “eMobility for Everyone” motto with accessible, culturally appropriate communications. Operationally, the solution streamlined communication between Chinese headquarters and Japanese operations and reduced translation costs and turnaround time, saving an average of 16 hours per person per week by eliminating CAT tools.


    Dominic Spurling
    I. Problem

    A tangled pipeline leads to confusion and delays.

    SharkNinja had little experience of localization. International expansion hadn’t been a priority for them, and they didn’t yet appreciate its full complexity. SharkNinja prides itself on constantly iterating and evolving, but the lack of structure around localization was causing significant challenges.

    Inconsistent handoffs of localizable assets and version control problems were causing inaccuracies and delays.


    II. Solution

    The game-changer for Rubric was being granted access to SharkClean’s content repository. This transparency allows us to build an asset-based picture of how resources evolve in real time, and to automate both the pull/push process and regression and validation checks.

    Once we had established a shared Git repository to act as the source of truth for UI strings, we were able to design a branching strategy and connect the repo to our continuous localization framework.
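    One of the automated validation checks described above might look like the following minimal sketch. The file layout, key names, and placeholder convention are assumptions for illustration, not Rubric's actual implementation.

```python
import re

# Extract {placeholder} tokens from a UI string (assumed convention).
def placeholders(s):
    return set(re.findall(r"\{[^}]+\}", s))

# Validate a target-language resource against the source of truth:
# every source key must exist, and placeholders must match exactly.
def validate(source, target):
    issues = []
    for key, src in source.items():
        if key not in target:
            issues.append(f"missing key: {key}")
        elif placeholders(src) != placeholders(target[key]):
            issues.append(f"placeholder mismatch: {key}")
    return issues

# Invented example resources.
en = {"greeting": "Hello, {name}!", "battery": "Battery at {pct}%"}
ja = {"greeting": "こんにちは、{name}さん！"}
print(validate(en, ja))  # → ['missing key: battery']
```

Running a check like this on every push to the shared repository is what catches version-control and handoff problems before they reach translators.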


    III. Experiences, Benefits, and Metrics

    Quote from Phoebe Zhang, Program Manager for Robotics at SharkNinja:

    “Moving to Rubric’s process changed everything. It eliminated the confusion and chaos, and since then it’s been really smooth and efficient.”

    – Typical turnaround time went from 14 days to 7 days.

    – Development teams no longer have to worry about “string freezes” or merge conflicts. They can make UI label changes at any time; any last-minute string changes are handled by MT and then checked by a linguist in the next sprint.

    – Localization-readiness criteria are well defined and can be assessed when onboarding new components for localization.

    – RubricCatcher automated QA analyzes content, highlights potential anomalies and out-of-sequence changes, checks terminology and improves consistency, resulting in more accurate translations.

    – Finance administration was improved so that it no longer affects on-time project delivery.


    Yoav Ziv
    I. Problem

    A large international ecommerce platform needed a faster turnaround and reduced cost for the translation of their online content. The reality of extracting data and documents into files, managed by a centralized TMS, then distributed to various language service providers and then collected back was too cumbersome for their business speed and cost requirements.


    II. Solution

    After a successful POC the company decided to deploy a solution that provides the following key elements:



    1. Direct integration – Bypassing the TMS, the company connected its content systems directly into the translation flow, so files and content are transferred without any barrier.

    2. Single Machine-to-Human flow – The company chose to integrate into a provider’s system that processed the entire flow – from translation memory, through machine translation, automatic quality estimate and eventually human post editing – in a single, automated flow.

    3. Setting automated quality thresholds – The flow deployed an automatic quality estimator, which helped assess MT quality and determine whether a segment should be forwarded to a human translator for post-editing. The customer ran tests to determine an acceptable QE threshold before it was set.
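    The threshold routing in step 3 reduces to a simple gate; this sketch uses an invented threshold and invented scores, since the customer's calibrated values are not public.

```python
# Segments whose QE score clears the tested threshold ship as-is;
# the rest are routed to human post-editing. The threshold value is
# illustrative, standing in for the customer's calibrated cutoff.
QE_THRESHOLD = 0.85

def route(segments):
    auto, human = [], []
    for seg in segments:
        (auto if seg["qe"] >= QE_THRESHOLD else human).append(seg["id"])
    return auto, human

batch = [
    {"id": "s1", "qe": 0.93},
    {"id": "s2", "qe": 0.71},
    {"id": "s3", "qe": 0.88},
]
auto, human = route(batch)
print(auto, human)  # → ['s1', 's3'] ['s2']
```

The whole cost saving reported below rests on where this single cutoff sits, which is why the customer ran sensitivity tests before fixing it.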


    III. Experiences, Benefits, and Metrics

    After a period of testing and quality sensitivity assessments, post-editing levels were reduced from 100% to about 70% of segments. Beyond a significant reduction in translation cost, the customer was able to discontinue the TMS subscription and save a significant annual subscription cost.


    The translation flow is now directly connected to the content systems and operates in a single, continuous flow.


    Denis Zhilko
    I. Problem

    A major game developer and publisher with over 2 million daily active users faced a growing volume of translations and frequent game updates, which meant increasing localization costs and scalability challenges. Previously, the company had relied on 100% native-speaker localization, which was both costly and time-consuming, but that approach could not scale to meet the increasing demand.


    The goal was to reduce localization costs by 30% without sacrificing quality. However, with multiple games requiring simultaneous localization across 10+ languages, that was challenging.


    II. Solution

    In order to translate higher volumes at scale, we implemented an AI translation solution tailored for specific content, with human review.


    The key aspects of the solution:

    1. NMT/LLM Evaluation – 10+ models were tested and optimized for all the required language pairs. Based on the AQI (Alconost Quality Index), ChatGPT and DeepL were selected as the best-performing models.


    2. LLM Customization – The selected models were customized and enriched with metadata to make sure the model takes into account specific categories of content like item descriptions, quest tasks, dialogues, etc. This allowed AI to process conversational nuances more effectively.


    3. Translation Memory & Glossary Optimization — To enhance the quality of the output and reduce repetitive translation costs, the LLMs were fine-tuned with TM and adjusted to align with the glossary.


    4. Hybrid AI-Human Workflow – The chosen models were trained on the client’s style guides and AI-generated translations were further post-edited by expert linguists.


    5. Continuous Quality Assurance – Regular LQA cycles were established to track AI performance, refine prompts, and fine-tune MT outputs based on real-time feedback from the linguists. At one point, a new version of ChatGPT replaced DeepL for all languages except Korean after new tests and prompts showed ChatGPT outperforming DeepL; later we replaced ChatGPT with Gemini for Traditional Chinese. AI models are constantly updated, and it’s important to keep testing and experimenting.
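    The glossary-alignment side of this workflow can be illustrated with a simple compliance check run on AI output before it reaches the post-editors. The term pairs and function names here are hypothetical.

```python
# Invented glossary entries: source term -> mandated target rendering.
glossary = {"mana": "マナ", "quest": "クエスト"}

# Flag any glossary term present in the source whose mandated
# rendering is absent from the AI translation.
def glossary_violations(source, translation):
    return [
        (src, tgt)
        for src, tgt in glossary.items()
        if src in source.lower() and tgt not in translation
    ]

print(glossary_violations("Complete the quest to earn mana.", "クエストを完了してマナを獲得"))
print(glossary_violations("Complete the quest.", "任務を完了"))
```

Checks of this kind let flagged segments be routed straight to linguists while compliant output flows through, which is how pre-translation inconsistencies are kept out of the post-editing queue.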


    The pilot project (one game, seven language pairs) took about 2 months. Success with the pilot meant we were able to successfully roll out the MTPE workflow for three more games and 10 languages during the next four months.


    III. Experiences, Benefits, and Metrics

    – 35-50% Cost Savings – The average monthly savings exceeded expectations: 50% for Dutch, 45% for Polish, and 35% for other languages.


    – Trained AI Models Able to Understand Client-Specific Context – The customized LLMs and metadata integration improved handling of game-specific context and made AI translations more dependable, reducing manual corrections.


    – 50% Faster Localization Turnaround – AI-assisted workflows helped meet release deadlines for four simultaneous projects without compromising quality.


    – High Localization Quality – External assessments from independent LQA agencies and user feedback confirmed that customized AI translations met high quality standards, preserving brand voice and in-game immersion.


    – TM and Glossary Usage in the LLM reduced inconsistencies during the pre-translation stage. This approach drove higher-quality AI output and meant less work for the post-editors, which led to even lower costs for the client.


    Conclusion:

    The AI-powered localization strategy successfully reduced costs, improved efficiency, and maintained translation quality, allowing the company to quickly localize new content while meeting their cost reduction goals.


    Prof Dr Rachel Herwartz
    I. Problem

    Children should be able to learn vocabulary using an app that matches their schoolbook.


    II. Solution

    The vocabulary is first entered into Excel by the authors and then recorded by professional speakers. The vocabulary is then stored in a professional dictionary management system, while the sounds and images are stored in a media database. Vocabulary trainer apps for iPhone and Android are generated and published from these two systems.
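    The first step of this pipeline, turning the authors' spreadsheet rows into app-ready entries that link each term to its recording, might look like the following sketch. CSV stands in for the Excel export, and the column and file names are illustrative.

```python
import csv
import io
import json

# Stand-in for the authors' Excel export (invented rows and columns):
# each row pairs a term with its translation and its recorded audio
# file in the media database.
rows = io.StringIO(
    "term,translation,audio\n"
    "Hund,dog,hund_001.mp3\n"
    "Katze,cat,katze_001.mp3\n"
)

# Convert spreadsheet rows into the entries an app build can consume.
entries = [
    {"term": r["term"], "translation": r["translation"], "audio": r["audio"]}
    for r in csv.DictReader(rows)
]

print(json.dumps(entries[0], ensure_ascii=False))
```

Keeping the spreadsheet as the single authoring surface and generating everything downstream is what lets the same data feed both the iPhone and Android apps.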


    III. Experiences, Benefits, and Metrics

    By working with LangOps, developers can learn what specific languages need in the production line.


    Maarten Korpershoek
    I. Problem

    The syntactic differences between Turkic languages (such as Turkish, Azerbaijani, and Uzbek) and Indo-European languages (such as English, German, and Dutch) have long posed a significant challenge for traditional machine translation (MT) tools. This is especially true for long compound sentences, to the extent that MT tools have offered little or no benefit in terms of efficiency, let alone quality. Standard large language models (LLMs) likewise fail to deliver the desired quality, partly because they cannot consistently apply translation strategies, prescribed translation shifts, and terminology.


    II. Solution

    For our Turkic-to-Dutch legal translation assignments commissioned by Dutch government bodies, we have set up a workflow that combines instructions tailored to large reasoning models (LRMs) with project files containing translation strategies, prescribed translation shifts, and terminology.
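    Assembling the model instructions from such project files might look like the sketch below. The file structure, the example strategy and shift, and the term pair are invented for illustration; the real project files are not shown here.

```python
# Hypothetical project file contents: translation strategies,
# prescribed shifts, and terminology for a Turkish-to-Dutch job.
project = {
    "strategies": ["Split long compound sentences before translating."],
    "shifts": ["Render Turkish nominalizations as Dutch subordinate clauses."],
    "terminology": {"savcılık": "openbaar ministerie"},
}

# Build the instruction block handed to the large reasoning model.
def build_prompt(source_text):
    term_lines = "\n".join(f"- {s} -> {t}" for s, t in project["terminology"].items())
    return (
        "Translate from Turkish to Dutch.\n"
        "Strategies:\n" + "\n".join(f"- {s}" for s in project["strategies"]) + "\n"
        "Prescribed shifts:\n" + "\n".join(f"- {s}" for s in project["shifts"]) + "\n"
        "Terminology:\n" + term_lines + "\n\n"
        "Source:\n" + source_text
    )

prompt = build_prompt("Savcılık soruşturma başlattı.")
print("openbaar ministerie" in prompt)  # → True
```

Stating strategies and shifts prescriptively, rather than hoping the model infers them from translation-memory examples, is the core of the approach described above.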


    III. Experiences, Benefits, and Metrics

    • While the results are certainly not perfect, they are significantly better than translations produced by MT tools and LLMs. In many cases, they would meet the standards required of sworn translations.


    • LRMs are surprisingly good at applying translation strategies and shifts defined in linguistic terms. This opens the door to a shift from the dominant example-based approach that relies on translation memories to a more prescriptive approach that specifies strategies and shifts.


    • Anonymizing the source texts remains the most labor-intensive part of the workflow.


    • We expect incremental improvements as LRMs evolve and as we refine our instructions and project files.


    • Ultimately, translation is a reasoning task, making MT tools and single-pass LLMs inherently less suitable than LRMs.


    Nick Lambson
    I. Problem

    The undergraduate localization program at Beijing Language and Culture University (BLCU) faces evolving challenges that reflect the dynamic demands of the global and domestic markets.



    1. Parents seek programs with clear, forward-looking goals that align with cutting-edge career paths, finding traditional translator training programs less reflective of today’s technological landscape.

    2. China’s young graduates are eager to seize opportunities in a competitive job market, where innovative skills are in high demand to support the nation’s rapid economic and technological advancements.

    3. Global trade dynamics, including tariff fluctuations, underscore the growing need for skilled language professionals to advance China’s Belt and Road Initiative and strengthen international collaboration.

    4. The rise of AI is transforming the traditional translation industry, prompting universities to rethink curricula to meet the urgent demand for tech-savvy language talent.


    These challenges highlight the critical need for a modernized approach to language studies, which BLCU is proactively addressing.


    II. Solution

    In 2024, the contributions of BLCU’s localization graduates to the blockbuster video game Black Myth: Wukong attracted positive attention from administrators, who elevated the localization program and endowed it with a new name: AI Translation. This major is based on what we call Intelligent Language Studies, which harmonizes with LangOps principles.


    BLCU Intelligent Translation Studies  ->  LangOps Principle

    Human-centered  ->  Value human contribution

    Tech-supported  ->  Try AI first

    Knowledge-enabled  ->  Embrace data-centric AI

    Scenario-driven  ->  Support all customer facing functions


    In the AI Translation program at BLCU, we don’t just talk the talk; we walk the walk. Freshmen are enrolled in Python programming classes from day one. By the time they graduate, they have solid experience developing software, integrating AI into workflows via APIs, evaluating AI translation quality, managing databases, handling linguistic data, building content management systems, and vibe coding. We have fully transitioned from the value proposition of localization (delivering a product) to that of LangOps (developing intelligent language systems).


    III. Experiences, Benefits, and Metrics

    Thanks to our adoption of LangOps principles, the employment rate of graduates from the “AI Translation” program ranks #1 among the university’s student body of 22,340. Lintao Han’s and my contribution to the Routledge publication Translation Studies in the Era of Artificial Intelligence has attracted significant attention both in China and abroad. Translation programs at universities worldwide are ripe for change. LangOps is the template, and BLCU’s AI Translation program is the exemplar.


    Lintao Han
    I. Problem

    In March 2025, a 7.7-magnitude earthquake struck Myanmar, prompting China to deploy rescue teams as part of its commitment to global humanitarian aid. However, language barriers posed significant challenges. Myanmar primarily uses Burmese, with English in some regions, while Chinese rescue teams relied on Mandarin, with limited Burmese proficiency. This gap hindered critical tasks such as medical aid, resource distribution, and disaster assessment. For instance, rescuers struggled to interpret local injury descriptions or navigate Burmese place names, slowing response times. The urgency of the crisis demanded rapid, reliable language solutions tailored to disaster scenarios, yet traditional translation tools lacked the speed, adaptability, and multimodal capabilities needed for low-resource languages like Burmese.


    II. Solution

    To address this pressing need, Beijing Language and Culture University, in collaboration with the National Emergency Language Service Corps, developed an AI-powered Emergency Language Service Platform using DeepSeek, a leading Chinese large language model. Completed in just 7 hours, the platform integrated text translation, speech translation, place name mapping, and image analysis, aligning with LangOps principles to deliver intelligent, scenario-driven language solutions.


    The platform was built on DeepSeek’s automated code generation, enabling rapid development of a lightweight, user-friendly mobile application. From day one, rescue teams and volunteers accessed real-time translations for Chinese, Burmese, and English, covering medical terminology, daily communication, and disaster-specific phrases. Speech translation, powered by integrated speech recognition and synthesis, facilitated on-site communication in noisy environments. The place name module, combining DeepSeek’s translation with Google Maps, enabled rescuers to query Burmese locations and receive Chinese or English translations with coordinates. Image analysis identified critical details from disaster-site photos, generating multilingual descriptions to support decision-making.
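    The lookup step of the place-name module can be sketched as below. The gazetteer entry, approximate coordinates, and function names are illustrative; the real platform combines the LLM's translation with Google Maps queries rather than a static table.

```python
# Invented gazetteer mapping a Burmese place name to its translations
# and approximate coordinates (illustrative data only).
gazetteer = {
    "နေပြည်တော်": {"en": "Naypyidaw", "zh": "内比都", "coords": (19.75, 96.10)},
}

# Resolve a Burmese place name to a translation plus coordinates,
# as a rescuer's query to the place-name module would.
def lookup(name, lang="en"):
    entry = gazetteer.get(name)
    if not entry:
        return None
    return entry[lang], entry["coords"]

print(lookup("နေပြည်တော်"))
```

Returning coordinates alongside the translated name is what let logistics teams feed query results directly into routing for supply deliveries.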


    This solution marked a shift from traditional translation—focused on delivering static outputs—to LangOps-driven intelligent language systems, dynamically supporting rescue operations through AI and data integration.


    III. Experiences, Benefits, and Metrics

    During the Myanmar earthquake, the platform provided over 5,000 translations, significantly enhancing rescue efficiency. For example, medics used speech translation to diagnose injuries in real time, while logistics teams relied on place name mapping to deliver supplies to remote areas. The platform’s offline mode and noise-resistant speech recognition ensured reliability in challenging conditions. Its rapid development and deployment underscored LLMs’ potential as a cornerstone of emergency language services, earning praise from rescue teams and local communities. My related research has amplified global interest in LangOps-driven solutions. The platform’s success positions it as a model for future disaster scenarios, with potential applications in floods, typhoons, and public health crises, reinforcing China’s leadership in AI-driven humanitarian innovation.