The use of Language Mapping for machine translation

The matter of language mapping is not something for which there is often a need, but just in case, I’ll give a brief description here.

So: You might have a situation where one or both languages (usually the target language) do not have an “engine” in the MT Cloud, but you would like to use a language (or pair) which is similar enough, linguistically, that using it (or them) could be of benefit.

Or you do have a TM for exactly that pair, but it might also be useful to draw on a neighbouring language via an MT engine.

Or you do have an MT engine from another provider, but – again – looking at another, related language would be beneficial.

There are two ways to access the mapping table (I shall get back to the table itself below):

  1. You use the Language Codes Mapping Table, which you can open via Add-Ins > Language Mapping (any changes here will affect all future projects which use the same – in this case – target language).
  2. Or you can do it when selecting the plugin SDL Machine Translation Cloud as a source for MT/TM – but then only if the project’s languages are included among the available MT engines.

Let’s say – to use an actual case – I have a translation from English to Luxembourgish, which latter language does not figure in the SDL MT Cloud services. However, since Luxembourgish is not too far away from German, using an MT engine for En > De might be useful. But I still don’t want to change the actual project languages. Here is where the language mapping comes in handy.

In this case only the first method can be used, since Luxembourgish is not offered as a target language in SDL’s MT Cloud, and therefore this language pair causes an error message when I try to use the SDL Machine Translation Provider for it. Therefore, before I create the project I open the mapping table as described above:

Every category is self-explanatory except MT Code and MT Code (locale). The MT Code is what actually decides the language in question, no matter what language name is specified. So in this case I want German to be mapped onto Luxembourgish, and therefore the MT Code for the latter (ltz) needs to be changed to “ger”. I do that and click OK.
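
In practice the mapping is nothing more than a lookup from the project language’s code to the code actually sent to the MT engine. A minimal conceptual sketch in Python (the “ltz” → “ger” pair mirrors the example above; the dictionary and function names are my own, not anything used by the plugin):

```python
# Illustrative only: how a language-code mapping might be applied before an
# MT request is built. Unmapped codes simply pass through unchanged.
MT_CODE_OVERRIDES = {
    "ltz": "ger",   # project target: Luxembourgish, MT engine: German
}

def mt_code_for(project_code: str) -> str:
    """Return the code actually sent to the MT cloud for a project language."""
    return MT_CODE_OVERRIDES.get(project_code, project_code)

print(mt_code_for("ltz"))  # ger
print(mt_code_for("eng"))  # eng
```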

I should add that you can search in all categories at the same time, so the easiest way to find the code I need is to type, in the Search field, “lu” for Luxembourgish and then “de” or “ger” for German. (Instead of scrolling.)

[As for the MT Code (locale), it is used for variants of the same language, so that for instance Arabic (U.A.E.), Arabic (Algeria) and Arabic (Egypt) all have the same MT Code but different locales and thus different MT engines. You cannot do anything with this, except that if the MT Cloud provider suddenly tells you that they have now introduced e.g. Arabic (Bahrain) with the locale code “arb”, you can then add that to the list even if the plugin provider has not yet updated it.]

Now when I arrive at the step in the project creation wizard where I select the TM/MT resources to use (step 3, Translation Resources), I can select the SDL Machine Translation Provider even though the project’s target language does not have an MT engine. The settings will look like this:

As you see, the target language and its flag are Luxembourgish, but the actual MT target is German – exactly as I wanted.

If my project’s target language had had its own MT engine but I wanted to look at another (related) language, I could have done that mapping here. Let’s say I translate to Danish but would be helped by looking at an MT engine for Norwegian as the target. Then I would click the View Language Mapping button in this dialog and get the same mapping table as above. There I would change the MT code “dan” into “nor” and get the desired result.

So – in principle simple, although it takes a bit of text to explain. My thanks to the ever-patient Paul Filkin for taking the time to clear up all my confusion.

Read more about the plugin here, about language mapping here, and about the MT Cloud Codes here.

Another excellent book on machine translation

Review of Jörg Porsiel (ed.): Maschinelle Übersetzung für Übersetzungsprofis. 384 pages. BDÜ Fachverlag 2020. €37. Order here (“Einkaufskorb” means “Shopping basket”).

Three years ago, the German translators’ association BDÜ published, via its publishing company BDÜ Fachverlag, the excellent machine translation primer “Machine Translation – What Language Professionals Need to Know” (reviewed by myself here). Its original text was written in German; this follow-up is written in both English and German, and although its title is in German (easily translated into English: Machine Translation for Professional Translators), the fact is that about 57 percent of the text is in English! Still, some of the parts in German are of such importance that I wish they were made available to an even larger audience. (And one is written in a German which seems almost intentionally to confirm the image of German as an unusually difficult language, with long, convoluted sentences – the worst one being an entire paragraph of 9 lines, 83 words. All other contributions, however, are lucidly written.)

This is the book for everyone who wants to (a) get a comprehensive picture of the current situation in the domain of machine translation, and (b) delve deeper into some of the areas which are most important. In particular I would recommend reading the very first of the contributions, Patrick Bessler’s and Aljoscha Burchardt’s “Gute Qualität zum kleinen Preis? Wandel von Erwartungen und Prozessen im Kontext von Maschineller Übersetzung” (Good Quality at Low Cost? Changed Expectations and Processes in the Context of Machine Translation), because it gives such a complete and knowledgeable picture of the whole process, from client to translator/post-editor, stressing the need for knowledge on the part of the client and going into detail concerning the new types of problems – in particular with the arrival of neural MT – for both Language Service Providers (LSPs) and translators. (Examples are the new tasks which the LSPs must handle with regard to both clients and translators, the problem of assessing – in advance – the MT quality, and the new types of errors which must be handled.) Much of this – and more – is also touched upon in the foreword by the editor, Jörg Porsiel, but the in-depth coverage here, in only 12 pages, is admirable.

Today, as everyone knows, it’s the so-called neural variant which dominates MT. This has consequences for the handling of the MT suggestions – consequences which are discussed in many places in this book. But for the reader who is interested in the theories behind neural MT, there is a long presentation here (“Neural Machine Translation”, by Josef van Genabith) – 57 pages – with texts on information theory, mathematical expressions, and neural networks which may tax the reader’s powers of concentration. However, there are also some parts of much more general interest, such as the detailed discussion of the differences between statistical and neural MT (pp. 73-74), as well as the sections on human parity and research directions, where the discussion of translation for under-resourced languages is particularly important.

The future that MT brings

In particular I think van Genabith’s thoughts about the future are worth noting: “Going out on a limb, (N)MT will fundamentally change the work of human translators to (i) post-editing raw (N)MT translation outputs, (ii) certifying (post-edited or raw) translations and (iii) moving human translators much more into copy-editing and language and content quality control”. About that future there are further discussions. Donald DePalma writes about “Augmented Translation Intelligence”, where more or less “intelligent” functions will make possible a more extensive use of resources on the net (some interesting examples: “disambiguate words and phrases”, “deliver contextual information”, “suggest locale-specific content”) as well as the facilitation of the cooperation between colleagues. And his CSA colleague Arle Lommel makes (in “At human parity?”) some cutting remarks on the claims that NMT is (almost) on a par with “human” translation. He ends with some sensible suggestions as to what MT developers should concentrate on rather than pursuing the elusive target of “human parity”, namely improved quality estimation (see below), integration with speech technologies, connection with human translators, and simpler deployment.

Another look at the future is presented in “Machine Translation of Novels in the Age of Transformer”. Transformer is, according to the authors, “the state-of-the-art architecture in neural MT”. A project is presented in which translations of 12 novels using different methodologies – one of which was Transformer – were evaluated. The authors do not claim any particular degree of “success”; only that Transformer is by far the best of the systems. They also suggest that training on segments longer than isolated sentences will lead to further improvements.

Yet another branch of future development is covered in “Neural Interactive Translation Prediction”; i.e. MT where the basis for the MT suggestions is immediately updated. Unsurprisingly, this study indicates that such updating would be preferable to many translators; however, so far only two providers that I know of (Lilt and CASMACAT, the one used here) offer that feature. (But the ModernMT service comes very close, with immediate updates of your uploaded TM, where matches take precedence over MT hits.)

When it comes to the future of post-editing work – i.e. editing of MT suggestions in a CAT tool, segment by segment; so-called PEMT – experienced post-editor Sara Grizzo is sceptical (in “Hat Post-Editing ausgedient?”; “Is Post-Editing a thing of the past?”): this is demanding work which is to a large extent unpopular among translators. She has come to the conclusion that, on the whole, PEMT makes sense above all for light post-editing (gisting). For more demanding translations, one should try to make use of MT in ways which are more palatable to the translator.

So what about post-editing itself?

PEMT is of course an important topic for this book, and it is covered in eight contributions. The aforementioned Sara Grizzo has two more contributions: one (“Post-Editing: ein Praxisleitfaden”) is a brief manual on the practice of post-editing. And in “Bezahlmodelle für Post-Editing” (“Payment models for Post-Editing”) she discusses the three main payment practices which are common today; one point being that none of them is really satisfactory. However, in the last contribution to the book, “Edit-Distance Based Compensation for Machine Translation”, Vincent Asmuth describes a variant of one of the models – EDC (for Edit-Distance Calculation) – which seems to take into account the work actually done by the post-editor, such as research and consideration of the MT suggestions, none of which is reflected in the resulting changes (if any) to the suggested translations. This is an interesting variant which could well be used as a starting point for discussion of this matter.

Another point raised by Grizzo is the importance of assessing in advance the amount of work needed for a post-editing job, and in particular the quality of the MT output. So far, only Memsource dares argue that they have a reliable function for this so-called Quality Estimation (as opposed to the Quality Control/Quality Assurance done on the final translation result), and it is briefly described, by Sara Szac and Heidi Depraetere, in “Quality Estimation”. They also describe a project called APE-QUEST, funded by the European Commission. They say that QE “should be used”; however, apart from referring to APE-QUEST – which I don’t believe is generally available – they do not provide any solutions.

Yet more on PEMT

Other articles on PEMT are, first, “DIN ISO 18587 in der Praxis” by Ilona Wallberg: an overview of the ISO standard, the title of which is “Translation services — Post-editing of machine translation output — Requirements”. Personally I am not sure of its importance, but since the standard itself is quite expensive (ca. EUR 82) it is good to have it described in detail here.

Related to this contribution is “The post-editor’s skill set according to industry, trainers and linguists”, by Clara Ginovart and Antoni Oliver, which lists a number of skills fundamental to PEMT; the most important ones being “decision-making, error identification and respect of PE guidelines”.

“Post-edition – fit für die Praxis” (The Practice of Post-editing), by Uta Seewald-Heeg and Chuan Ding, is in some ways a companion text to Sara Grizzo’s shorter (and more easily read) “Praxisleitfaden”, already mentioned.

And a more psychologically oriented approach to PEMT is taken by Jean Nitzke in “Problemlösungsstrategien beim Post-Editing in Verbindung mit psychologischen Aspekten” (Problem-solving Strategies in Post-Editing in Connection with Psychological Aspects). A question posed: Can a post-editor work with PEMT every day without their concentration and motivation suffering? The perhaps obvious answer given here is that one should strive to work with a mixture of different tasks, and thereby develop methods and strategies for post-editing.

François Massion discusses PEMT from the viewpoint of an LSP (Language Service Provider) in “NMT im Einsatz bei einem Dienstleister” (NMT practiced by an LSP). It contains a section on optimization of post-editing (pp. 270-) which is certainly of general interest. And in a report on the use of terminology in training and customization of MT engines (“Terminologie in der neuronalen maschinellen Übersetzung” by Tom Winter and Daniel Zielinski) there is a discussion on the importance of terminology during translation which is well worth reading, in particular the detailed part on terminology errors in machine translations (pp. 216-).

As for the rest…

Other topics covered are the matter of confidentiality (two articles) and the use of controlled language (and while this discussion is certainly worthwhile, it is at least my experience that the LSP – not to mention the end-of-the-line translator/editor – extremely seldom has the opportunity to affect the source text in this manner).

It should also be mentioned that sprinkled in many of the contributions is the view that translation and post-editing are two very different tasks, and while a good post-editor is probably also a good translator, far from all translators find post-editing at all attractive.

If there is one perspective which I miss in this very rich book, however, it is the use of MT outside post-editing work – i.e. where the use of MT is not requested by the client but is simply used as an additional resource in a “normal” job. This is not the same task as PEMT! For one thing, you can choose among various MT engines; for another, it does not affect your pay. (But I must admit that the work itself is more or less similar to PEMT.)

Finally, I would strongly urge BDÜ to publish this book entirely in English. While I am sure that most German-speaking readers have little problem with the English texts, I doubt that the reverse is true. And the book deserves a very wide readership. May I suggest using the assistance of NMT?

There is also a brief glossary and presentations of the (30) contributors.

The MDÜ web site has a brief presentation of the book in German as well as sample pages – 18 of them, including the contents list.

Note: I had intended to provide a German version of this review as well, but in the end I refrained, since (a) my writing in German leaves a bit to be desired (even though I have no problem reading, and translating from, German), and (b) those German readers who are interested in this tome no doubt will have no problems reading this text in English.

Changing the Studio language settings

In addition to what I have written in the manual about changing the languages used in the Freelance edition of Studio, there is one other quite simple way of doing it. But it involves manipulating the Windows registry, so care should be taken when you do it. (Even before I wrote this, a corresponding instruction was published as a wiki post at the SDL Community without my being aware of it. Fortunately, we say the same thing, although the wiki has more images.)

To be on the safe side (even though the change involved is very simple), you should first back up your registry. On the site How to backup the entire Registry on Windows 10, you will find detailed instructions on how to back up and restore the registry using system restore. You can also use a manual backup, which is described in How to back up and restore the registry in Windows. (I haven’t tried either, so I leave it to you to decide which is best.)

Anyway, hoping you won’t run into any problems, here is the procedure for using the registry to change your Studio languages (all of them, if needed). A scripted sketch of the same steps follows after the list.

  1. Right-click the Start icon and select Search. Then enter regedit and open the registry (allowing the computer to make changes when asked). You can open it with or without administrator authority; the result will be the same.
  2. To be on the safe side, create a backup copy of the registry by selecting File > Export.
  3. Then navigate manually to the key to be changed: HKEY_CURRENT_USER\Software\Microsoft\LSDRClient15
    (For Studio 2017 it is LSDRClient5.) You can also press Ctrl+F and search for LSDRClient15. And here is the folder:

  4. Right-click the folder name and select Rename.
  5. Name the folder, for instance, LSDRClient15_old.
  6. Close the registry.
  7. Open Studio. During the opening process you will be asked to select your five languages. (It once happened to me that this stage did not appear. I checked and found that, for some reason, my registry change hadn’t “taken”. When I did it again, I got the desired result.)
    That is the only change. Everything else is as before, e.g. all projects and AppStore plugins remain.
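
For those who prefer to script the same steps, here is a minimal sketch in Python that calls the standard reg.exe commands (the key name mirrors the Studio 2019 example above; the backup path is just an example, and you should of course only run this after backing up as described):

```python
import subprocess

# Sketch only: export the key as a backup, copy it to an "_old" name and then
# delete the original - which has the same effect as renaming it manually.
KEY = r"HKCU\Software\Microsoft\LSDRClient15"   # LSDRClient5 for Studio 2017
BACKUP_FILE = r"C:\Temp\LSDRClient15.reg"       # example path; folder must exist

subprocess.run(["reg", "export", KEY, BACKUP_FILE, "/y"], check=True)
subprocess.run(["reg", "copy", KEY, KEY + "_old", "/s", "/f"], check=True)
subprocess.run(["reg", "delete", KEY, "/f"], check=True)
```

The next time Studio starts, it should then ask for your five languages again, just as after the manual rename.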

AppStore plugin names in different contexts

Once you have started to download and use AppStore plugins (you should!), you may notice that they often have different names in different contexts. Thus the useful appNotifications has an installation file called AppStoreIntegration; it is called appNotifications in the Plugin Management window and AppStoreIntegration in the Plug-ins list on the Add-ins ribbon. Normally this is something that you don’t need to care about, but as soon as you – for instance – want to locate a plugin in one of the lists, or check if you have already downloaded the installation file, you may be in trouble.

This list covers most of the plugins which to my mind are among the more useful ones, and it gives many – but far from all – of the various names. (The reason that the first column has more names than the others is that I plan to fill in the rest of the columns for those, too.)

The bottom list shows which plugins are already on the Plug-ins list in Studio itself before you add anything – but they are still not called System plug-ins. There is probably a reason for this, but I don’t know why.

Of course these things will change from time to time, so I will update the list now and then. This version is dated October 2019. And the highlighted items are ones whose identity I am simply not certain about, so I have made guesses.

The list in pdf format is found here, and I have included the above text.

What are your default QA check settings?

This is a discussion held at the Studio Beta Group Forum. Since you have to be a member of that group to read it, and since it is quite interesting, I have obtained permission from the participants to publish it here.

Daniel Brockmann

Now that CU2 is out the door, I would love to hear from you on a specific topic. We are currently designing QA checks for the Online Editor environment, and “reinventing” them to some extent for that context. One question that came up was what typical default settings for QA checks are. Studio just has the check for forgotten translations enabled and nothing else – which we believe makes its use a bit more difficult than if some other checks were already enabled by default. So – against that background – can I ask you to reply here with the defaults you typically change? Obviously many of you also have specific RegEx checks etc. – maybe for those you can just say at a high level “I add my own regex checks” or so. A high-level list would be best.

Marco Rognoni 

I usually include the following:

Check for repeated words in target

Check that source and target end with the same punctuation

Check for multiple spaces

Claudio Nasso 

Regarding your QA checks question, these are my custom settings, compared to those already checked or unchecked by default:

  • Segment verification > Source and target are identical
  • Segments to Exclude > Exclude exact matches
  • Segments to Exclude > Exclude repetitions
  • Segments to Exclude > Exclude locked segment
  • Inconsistencies > Check for inconsistent translations
  • Inconsistencies > Check for repeated words in target
  • Inconsistencies > Check for unedited fuzzy matches
  • Punctuation > Check that source and target end with the same punctuation
  • Punctuation > Check for unintentional spaces before (applies to Italian, in my case)
  • Punctuation > Check for multiple spaces
  • Punctuation > Check for multiple dots
  • Punctuation > Check for multiple dots > Ignore ellipsis dots (…)
  • Punctuation > Check for extra space at the end of target segment
  • Punctuation > Check brackets
  • Numbers > Check numbers
  • Numbers > Check times
  • Numbers > Check dates
  • Numbers > Check measurements
  • Trademark check > Check trademarks characters
  • Length limitations > Check length limitation (only when necessary)
  • Tag verifier > Ignore formatting tags (in this case I uncheck it)
  • Verification settings > Ignore locked segments
  • Verification settings > Enable recognition of two-letters terms
  • Number verifier > Number verifier settings > Exclude tag text
  • Number verifier > Number verifier settings > All source thousands separators > Period (applies to Italian, in my case)
  • Number verifier > Number verifier settings > All decimal separators > Comma (applies to Italian, in my case)

Marco Rognoni 

Hi Claudio,

That’s a lot of QA checks! 🙂

Personally, my experience is that by adding so many checks you always get a lot of errors, and it takes more time to verify each of them in Studio than to check them manually during the review/proofreading stages before delivery.

Of course each of us has a personal way of working, so I totally understand that you may prefer to have all those checks in place.

This shows that Daniel’s question is very interesting, and most likely each reply will be different based on the established method of every single linguist.

Claudio Nasso 

Hi Marco,

you are right, enabling all my proposed verification items may generate a lot of errors/warnings/notes, but this is true only when the review/editing stages of a translated project have been carried out in an inadequate way.

When the reviewing/proofreading stages have been correctly carried out, the number of “errors/warnings/notes” will be much smaller, and they will further help us to spot those we have forgotten.

Moreover, after having set general custom QA checks, pairing them to the proper “signal” (I mean “Error”, “Warning” or “Note”), we have an option to show just the desired “signal”, or to disable some of them before running the “Verify” function on a particular project.

But, as you have pointed out, the choice of custom QA settings is tied to specific projects/requirements, and I agree with you that Daniel’s question is interesting because it will spot various working methods adopted by each colleague.

Tuomas Kostiainen 

Generally, I use the following checks:

  • Segment verification > Check for forgotten and empty translations
  • Segments to Exclude > Exclude PerfectMatch units
  • Segments to Exclude > Exclude locked segment
  • Inconsistencies > Check for inconsistent translations [Ignore tags and case]
  • Inconsistencies > Check for repeated words in target [Ignore numbers and case]
  • Punctuation > Check for unintentional spaces before [:!?;]
  • Punctuation > Check for multiple spaces
  • Punctuation > Check for multiple dots > Ignore ellipsis dots (…)
  • Punctuation > Check for extra space at the end of target segment
  • Regular Expressions > I use my own
  • Trademark check > Check trademark characters
  • (Length limitations > Check length limitation [only when necessary])
  • Tag verifier > All 5 tag checks AND Ignore formatting tags

(Copied and modified from Claudio’s list — thank you!)

Frank Drefs 

We use the following settings:

  • Segment verification > Source and target are identical

  • Segments to Exclude > All deselected
  • Inconsistencies > Check for inconsistent translations (Ignore tags + Ignore case selected)
  • Inconsistencies > Check for repeated words in target (Ignore case selected)
  • Inconsistencies > Check for unedited fuzzy matches
  • Punctuation > Check for multiple dots
  • Punctuation > Check for multiple dots > Ignore ellipsis dots (…)
  • Tag verifier > All checks selected
  • Tag verifier > Ignore formatting tags

Claudia Alvis 

  • Segment verification > Check for forgotten and empty translations
  • Inconsistencies > Check for inconsistent translations [Ignore tags and case]
  • Inconsistencies > Check for repeated words in target [Ignore numbers]
  • Inconsistencies > Check for unedited fuzzy matches [All segments]
  • Punctuation [All checked]
  • Numbers [None checked]
  • Trademark check > Check trademarks characters
  • Length limitations > Check length limitation (only when necessary)
  • Tag verifier [Tags added, Deleted, Ghost tags]
  • Terminology verifier > Check for possible non-usage of target terms [min. match value 85%]
  • Terminology verifier > Check for terms which may have been set as forbidden
  • Terminology verifier > Ignore locked segments

What you want to know about Intento

Intento is a remarkable Studio plugin which gives you easy access to more than twenty MT providers. Its use is not free (see below), but the charges are very reasonable.

Confidentiality

This is what is said in “Exhibit E” of the license agreement:

“We have two types of requests (from Customer to Intento and from Intento to Third-Party Services), and four types of Customer Data: request metadata, input data (request payload), user credentials (to external services, if necessary), and the data processing results (e.g. translated text or tags extracted from an image).” And: “The request metadata is everything contained in the request except input data and credentials, plus metadata derived from the payload.” These data may be deleted by user request.

Input data (source text) is stored only for the milliseconds between the reception of the request and its submission to the MT provider. The same applies to the target text, between its reception and its delivery to the client.

But of course you also have to check the MT provider’s confidentiality terms to make sure that they fulfil your needs.

Account

To create an account, click Sign in and then follow the simple procedure. Once you have an account, you have access to all the necessary information at the Console Dashboard – in particular your API key for Production. (The Sandbox option – which is free – is not really for translation but for testing of the API integration.) You will need that key every time you make a new project setting using Intento, because unfortunately it is not possible to save it in the project settings. (Maybe in the future?)

Project Settings

The Intento Studio plugin is available both from the AppStore and from the Intento shared folder; the latter may be in a later development stage, since the former needs reviewing by SDL before publishing. (Curiously, neither of them appears on the Add-Ins > Plugins list, and only the shared folder one appears in the SDL Plugin Management window.)

In Studio, open the Project Settings and then the appropriate Translation Memory and Automated Translation settings. Then (as usual) click Use. The Intento MT Hub plugin is shown on the list as Intento MT Translation Provider; the other one is called Intento MT Provider. They open the following settings windows, with the former first:

In both cases you must enter the API key I mentioned above and click the Check button. You need to do this every time you open this window; the key is not retained.

Once the key is approved, the Provider list is available. For some of the providers you can use your own credentials, which means that the charge will be made directly by that provider – otherwise Intento will charge on behalf of the provider. The “custom model” is applicable if you have your own customized MT model with the provider in question. “Payload Logging for 30 min” is meant for clients who may need logging (the default mode is “no trace”) in case there are issues to resolve.

Smart routing

Smart routing means that Intento makes the choice of MT provider for you, based on the latest Intento MT Benchmark. The price will be less than USD25 per 1M characters, and “full data protection” is provided, as well as high reliability.

Prices

There is ample information on prices in the license agreement (“Exhibits” A, C, and D), but you probably want to know before even starting the procedure of creating an account or setting up a project. Here is a brief overview. You will be charged for both Intento’s and the MT provider’s services.

Intento’s prices are as follows:

The routing fee depends on the total amount of machine translation (in characters):

  • up to and including 100M characters: USD 5.00 per 1M characters
  • more than 100M and up to 1B characters: USD 4.00 per 1M characters
  • more than 1B characters: USD 3.00 per 1M characters

You pay per month or when the accumulated fee equals USD 5000.

For 1M characters – probably at least 100,000 words in a European language, except Finnish – I must say I could easily afford USD5.
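
If you want a feel for what this means for a given volume, here is a small sketch (my own arithmetic, assuming the rate of the tier you end up in applies to the whole volume – the license agreement itself is the authoritative source):

```python
def intento_routing_fee(characters: int) -> float:
    """Routing fee in USD, using the per-1M-character tiers quoted above."""
    if characters <= 100_000_000:          # up to and including 100M
        rate = 5.00
    elif characters <= 1_000_000_000:      # >100M - 1B
        rate = 4.00
    else:                                  # >1B
        rate = 3.00
    return characters / 1_000_000 * rate

print(intento_routing_fee(1_000_000))      # 5.0 - one month of 1M characters
print(intento_routing_fee(250_000_000))    # 1000.0
```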

MT providers’ prices are “Retail Recommended Prices for Third Party Services”, i.e. what you pay if you use that provider directly. This means that the charges for the MT provider services are the same whether you let Intento charge for them or you use your own credentials in the setup.

So there you have it. Apart from the question marks concerning user information, I think this is an excellent service.

The AppStore at SDL

The SDL AppStore site is renovated from time to time. Also, not all functions on it are self-evident. So here is a brief orientation which may help you to utilize it without too much experimentation.

This is the start page:

Unless you are interested in web content management, select Language Solution Apps. You will be shown apps in these categories: Trending apps, Recently added / updated, Most popular for Studio 2019, and Apps for terminology.

You can narrow your search by clicking the View all apps button, which leads to these filtering options:

Your selections are shown above that menu, like in this example:

The apps that are shown can be sorted using these two buttons:

Clicking Top rated (or whatever that position says – i.e. your latest choice), you get a choice of Last updated – Most downloaded – Most recent – Most reviewed and Top rated.

With the arrow at right, you will have the apps sorted top down or the opposite (Last to Oldest or the reverse, Most to Least or the reverse, etc.).

Some words of explanation:

  • Last updated and Most recent seem to be the same. A “last updated” (or “recent”) app is one that is new or has recently been revised – sometimes with a new version number, sometimes without. A bit confusing.
  • Most reviewed is exactly that (although of course the reviews are not always positive; however, they sometimes contain valuable information, so it may pay to take a look).
  • Top rated are apparently the apps with the best ratings. However, the rationale for the ordering of the non-reviewed apps is not clear.

As you can see, there are many similarities between this site and the AppStore window in Studio. And despite its advantages there is still room for improvement (apart from such minor matters as clarification of the Last updated/Most recent, Most reviewed and Top rated categories). And you still cannot sort the plugins in alphabetical order – but of course that is now the default presentation in the corresponding Studio window. For myself, I would also like to see such a simple thing as a designated space for the price of the paid apps.

How (un)safe is machine translation?

Note: This is a revised version of a text previously published at the eMpTy Pages blog under the heading “The Data Security Issues Around Public MT – A Translator Perspective”, with an extensive introduction by blog editor Kirti Vashee and some reader comments. This version is slightly updated.

Some time ago there were a couple of posts on this site discussing data security risks with machine translation (MT), notably by Kirti Vashee and by Christine Bruckner. Since they covered a lot of ground and might have created some confusion as to what security options are offered, I believe it may be useful to take a closer look from a narrower perspective, mainly the professional translator’s point of view. And although the starting point is the plugin applications for SDL Trados Studio, I know that most of these plugins are also available for other CAT tools.

About half a year ago, there was an uproar about Statoil’s discovery that some confidential material had become publicly available due to the fact that it had been translated with the help of a site called translate.com (not to be confused with translated.net, the site of the popular MT provider MyMemory). The story was reported in several places; this report gives good coverage.

Does this mean that all, or at least some, machine translation runs the risk of compromising the material being translated? Not necessarily – what happened to Statoil was the result of trying to get something for nothing; i.e. a free translation. The same thing happens when you use the free services of Google Translate and Microsoft’s Bing. Frequently quoted terms of use for those services state, for instance, that “you give Google a worldwide license to use, host, store, reproduce – – – such content”, and (for Bing): “When you share Your Content with other people, you understand that they may be able to, on a worldwide basis, use, save, record, reproduce – – – Your Content without compensating you”. This should indeed be off-putting to professional translators, but it should not be cited to scare them away from using services for which those terms are not applicable.

The principle is this: If you use a free service, you can be almost certain that your text will be used to “improve the translation services provided”; i.e. parts of it may be shown to other users of the same service if they happen to feed the service with similar source segments. However, the terms of use of Google’s and Microsoft’s paid services – Google Cloud Translate API and Microsoft Text Translator API – are totally different from the free services. Not only can you select not to send back your finalized translations (i.e. update the provider’s data with your own translations); it is in fact not possible – at least not if you use Trados Studio – to do so.

Google and Microsoft are the big providers of MT services, but there are a number of others as well (MyMemory, DeepL, Lilt, Kantan, Systran, SDL Language Cloud…). In essence, the same principle applies to most of them. So let us have a closer look at how the paid services differ from the free.

Google’s and Microsoft’s paid services

Google states, as a reply to the question Will Google share the text I translate with others: “We will not make the content of the text that you translate available to the public, or share it with anyone else, except as necessary to provide the Translation API service. For example, sometimes we may need to use a third-party vendor to help us provide some aspect of our services, such as storage or transmission of data. We won’t share the text that you translate with any other parties, or make it public, for any other purpose.”

And here is the reply to the question after that, Will the text I send for translation, the translation itself, or other information about translation requests be stored on Google servers? If so, how long and where is the information kept?: “When you send Google text for translation, we must store that text for a short period of time in order to perform the translation and return the results to you. The stored text is typically deleted in a few hours, although occasionally we will retain it for longer while we perform debugging and other testing. Google also temporarily logs some metadata about translation requests (such as the time the request was received and the size of the request) to improve our service and combat abuse. For security and reliability, we distribute data storage across many machines in different locations.”

For Microsoft Text Translator API the information is more straightforward, on their “API and Hub: Confidentiality” page: “Microsoft does not share the data you submit for translation with anybody.” And on the “No-Trace” page: “Customer data submitted for translation through the Microsoft Translator Text API and the text translation features in Microsoft Office products are not written to persistent storage. There will be no record of the submitted text, or portion thereof, in any Microsoft data center. The text will not be used for training purposes either. – Note: Known previously as the “no trace option”, all traffic using the Microsoft Translator Text API (free or paid tiers) through any Azure subscription is now “no trace” by design. The previous requirement to have a minimum of 250 million characters per month to enable No-Trace is no longer applicable. In addition, the ability for Microsoft technical support to investigate any Translator Text API issues under your subscription is eliminated.”

Other major players

As for DeepL, there is the same difference between free and paid services. For the former, it is stated – on their “Privacy Policy DeepL” page, under Texts and translations – DeepL Translator (free) – that “If you use our translation service, you transfer all texts you would like to transfer to our servers. This is required for us to perform the translation and to provide you with our service. We store your texts and the translation for a limited period of time in order to train and improve our translation algorithm. If you make corrections to our suggested translations, these corrections will also be transferred to our server in order to check the correction for accuracy and, if necessary, to update the translated text in accordance with your changes. We also store your corrections for a limited period of time in order to train and improve our translation algorithm.”

To the paid service, the following applies (stated on the same page but under Texts and translations – DeepL Pro): “When using DeepL Pro, the texts you submit and their translations are never stored, and are used only insofar as it is necessary to create the translation. When using DeepL Pro, we don’t use your texts to improve the quality of our services.” And interestingly enough, DeepL seems to consider their services to fulfil the requirements stipulated – currently as well as in the coming legislation – by the EU Commission (see below).

Lilt is a bit different in that it is free of charge, yet applies strict Data Security principles: “Your work is under your control. Translation suggestions are generated by Lilt using a combination of our parallel text and your personal translation resources. When you upload a translation memory or translate a document, those translations are only associated with your account. Translation memories can be shared across your projects, but they are not shared with other users or third parties.”

MyMemory – a very popular service which in fact is also free of charge, even though it uses the paid services of Google, Microsoft and DeepL (but you cannot select the order in which those are used, nor can you opt out from using them at all) – also uses its own translation archives and offers the use of the translator’s private TMs. Your own TM material cannot be accessed by any other user, and as for MyMemory’s own archive, this is what they say, under Service Terms and Conditions of Use:

“We will not share, sell or transfer ’Personal Data’ to third parties without users’ express consent. We will not use ’Private Contributions’ to provide translation memory matches to other MyMemory’s users and we will not publish these contributions on MyMemory’s public archives. The contributions to the archive, whether they are ’Public Data’ or ’Private Data’, are collected, processed and used by Translated to create statistics, set up new services and improve existing ones.” One question here is of course what is implied by “improve” existing services. But MyMemory tells me that it means training their machine translation models, and that source segments are never used for this.

And this is what the SDL Language Cloud privacy policy says: “SDL will take reasonable efforts to safeguard your information from unauthorized access. – Source material will not be disclosed to third parties. Your term dictionaries are for your personal use only and are not shared with other users using SDL Language Cloud. – SDL may provide access to your information if SDL plc believes in good faith that disclosure is reasonably necessary to (1) comply with any applicable law, regulation or legal process, (2) detect or prevent fraud, and (3) address security or technical issues.”

Is this the whole truth?

Most of these terms of service are unambiguous, even Microsoft’s. But Google’s leave room for interpretation – sometimes they “may need to use a third-party vendor to help us provide some aspect of [their] services”, and occasionally they “will retain [the text] for longer while [they] perform debugging and other testing”. The statement from MyMemory about improving existing services also raises questions, but I am told that this means training their machine translation models, and that source segments are never used for this. However, since MyMemory also utilizes the Google Cloud Translate API (and you don’t know when), you need to take the same care with both MyMemory and Google.

There is also the problem with companies such as Google and Microsoft that you cannot get them to reply to questions if you want clarifications. And it is very difficult to verify the security provided, so that the “trust but verify” principle is all but impossible to implement (and not only with Google and Microsoft).

Note, however, that there are plugins for at least the major CAT tools that offer possibilities to anonymize (mask) data in the source text that you send to the Google and Microsoft paid services, which provides further security. This is also to some extent built into the MyMemory service.
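
To give an idea of what such masking amounts to, here is a minimal sketch (my own, not the workings of any particular plugin): sensitive items are replaced with placeholders before the segment is sent to the MT service and restored in the returned translation.

```python
import re

# Example patterns only - a real tool would let you define your own.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NUMBER": re.compile(r"\b\d[\d ,.]*\b"),
}

def mask(segment: str):
    """Replace sensitive items with placeholders; remember the originals."""
    placeholders = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(segment)):
            key = f"<{label}{i}>"
            placeholders[key] = match
            segment = segment.replace(match, key, 1)
    return segment, placeholders

def unmask(translation: str, placeholders: dict) -> str:
    """Put the original items back into the returned translation."""
    for key, original in placeholders.items():
        translation = translation.replace(key, original)
    return translation

masked, found = mask("Contact jane.doe@example.com about invoice 4711.")
# masked == "Contact <EMAIL0> about invoice <NUMBER0>."
# ...send `masked` to the MT service, then restore with unmask(result, found).
```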

But even if you never send back your translated target segments, what about the source data that you feed into the paid services? Are they deleted, or are they stored so that another user might hit upon them even if they are not connected to translated (target) text?

Yes and no. They are generally stored, but – also generally – in server logs, inaccessible to users and only kept for analysis purposes, mainly statistical. Cf. the statement from MyMemory.

My conclusion, therefore, is that as long as you do not return your own translations to the MT provider, and you use a paid service (or Lilt), and you anonymize any sensitive data, you should be safe. Of course, your client may forbid you to use such services anyway. If so, you can still use MT but offline; see below.

What about the European Union?

Then there is the particular case of translating for the European Union, and furthermore the provisions in the General Data Protection Regulation (GDPR), to enter into force on 25 May 2018. As for EU translations, the European Commission uses the following clause in their Tender specifications:

”Contractors intending to use web-based tools or any other web-based service (e.g. cloud computing) to execute the /framework contract/ must ensure full compliance with the terms of this call for tenders when using such services. In particular, the provisions on confidentiality must be respected throughout any web-based process and the Union’s intellectual and industrial property rights must be safeguarded at all times.” The commission considers the scope of this clause to be very broad, covering also the use of web-based translation tools.

A consequence of this is that translators are instructed not to use “open translation services” (which rather begs for a definition, does it not?) because of the risk of losing control over the contents. Instead, the Commission has its own MT system, e-Translation. On the other hand, it seems possible that DG Translation is not quite up to date as concerns the current terms of service – quoted above – of the Google Cloud Translate API and Microsoft Text Translation API, and if so, there may be a slight possibility that they might change their policy with regard to those services. But for now, the rule is that before a contractor uses web-based tools for an EU translation assignment, an authorisation to do so must be obtained (and so far, no such requests have been made).

As for the GDPR, it concerns mainly the protection of personal data, which may be a lesser problem generally for translators (at least if you don’t handle texts such as medical records, legal cases, etc.). In the words of Kamocki & Stauch on p. 72 of Machine Translation, “The user should generally avoid online MT services where he wishes to have information translated that concerns a third party (or is not sure whether it does or not)”. If you do handle personal data, you should forget about MT since the new regulation requires you to have a contract with the data processor (i.e. the MT service provider), and I doubt that for instance Google or Microsoft will be bothered.

Offline services and beyond

There are a number of MT programs intended for use offline (as plugins in CAT tools), which of course provides the best possible security (apart from the fact that transfer back and forth via email always constitutes a theoretical risk, which some clients try to eliminate by using specialized transfer sites). The drawback – apart from being limited to your own TMs – is that they tend to be pretty expensive to purchase.

The ones that I have found (based on investigations of plugins for SDL Trados Studio) are, primarily, Slate Desktop translation provider, Transistent API Connector, and Tayou Machine Translation Plugin. I should add that so far in this article I have only looked at MT services based on statistical machine translation or its further development, neural machine translation. But it seems that one offline contender which for some language combinations (involving English) also offers pretty good “services” is the rule-based PROMT Master 18.

However, in conclusion I would say that if we take the privacy statements from the MT providers at face value – and I do believe we can, even when we cannot verify them – then for most purposes the paid translation services mentioned above should be safe to use, particularly if you take care not to pass back your own translations. But still I think both translators and their clients would do well to study the risks described and advice given by Don DePalma in this article. Its topic is free MT, but any translation service provider who wants to be honest in the relationship with the clients, while taking advantage of even paid MT, would do well to study it.

The many faces of post-editing

Note 1: This is a revised version of a text previously published at the eMpTy Pages blog under the heading “Post-editing” – what does it really mean?”. This version is more up-to-date (and slightly enlarged), but the blog post is followed by several interesting comments not included here.

Note 2: This new version, published on 2 April, is a very much revised version of the one previously published here.

You might wonder in what way editing of hits in a CAT tool’s TM is different from editing of “hits” from an MT engine. Because if there were not a clear difference, there would not be any reason to invent a particular term for the latter.

But the term “post-editing” has been established since the 1980s, so there should be something to it.[1] And the way I see the difference is this: Certainly for many years, Déjà Vu – in particular – but also memoQ and perhaps other CAT tools have cleverly put together TM fragments into more or less complete target segment translations, but they can almost never pre-translate whole documents, something which MT can do. It is true that the MT-translated target text might be almost totally useless, but the point is that a client might come to you and say: Here is a machine translated document; could you go through it and produce a useful result? Or: Here is a French document; could you run it through this MT engine and edit the target segments into good German? The expectation, of course, being that the use of MT will make the translation cost less.

The difference lies not so much in the job itself – even if many people say that post-editing of MT-translated texts is rather different from the “usual” translation in a CAT tool with one or more TMs, or without one – as in the fact that MT produces a complete translation – usable or not – of (in theory at least) every piece of source text.

Post-editing also means that the client requests the use of an MT engine (or has already used it). If I, in a normal job, use MT and even produce the whole translation by editing MT-translated segments, that’s a different case (and one which does not concern anyone but me, provided that confidentiality is not compromised in any way). Another factor to consider is whether the translation task concerns a pre-translated document, or whether the translator translates the text segment by segment in the usual way but using the suggestions from an assigned MT engine. I’ll discuss that later in this article.

The matter of quality

In a post-editing job, a level of quality is also specified – the client wants a translation which is good enough for its purposes but does not want to pay for one that is “unnecessarily” good. Therefore the following quality levels have been defined, in the ISO standard 18587:2017, Translation services – Post-editing of machine translation output – Requirements and other documents:

  1. Light post-editing (also called “gisting”): The final text is understandable and correct as to content, but the editor need not – and should not – strive for a text much better than that; s/he should use as much as possible of the unedited MT version.
  2. Full post-editing: The result, according to some definitions, should be “indistinguishable from human output” (in the words of ISO 18587), or “publishable”. But there are conflicting views on this: some sources say that stylistic perfection is not expected and that clients actually do not expect the result to be comparable to “human” translation. “Do not worry too much about style, standards of textuality”, and “Quality expectations: medium”.[2] And: “Texts that are post-edited should not strive for linguistic perfection; instead, the goal should be linguistic adequacy.”[3]

Looking past the definitions in the standard and instead using definitions of what I perceive as practice, I would say that there are in fact three levels of post-editing MT output: At the top there is the result “indistinguishable from human output”; i.e. it is impossible to tell whether MT has been used or not. Slightly below that, there is the “full” post-editing: correct in all regards but perhaps not top-level stylistically. And then there is the “light” level: usable for understanding but not more (not much fun to read).

Of course these categories are only points on a continuous scale; it is difficult to objectively test that a PEMT text fulfils the criteria of one or the other. (Is the light version really not above the target level? Is the full version really up to the requirements? Has the client specified what type of full version is required?).

There are some interesting research results as to the efforts involved, insights which may be of help to the would-be editor. Thus it seems that editing medium quality MT (at all levels) takes more effort than editing poor output – it is cognitively more demanding than discarding and rewriting the text. Also, the effort needed to detect an error and decide how to correct it may be greater than the rewriting itself; and reordering words and correcting mistranslated words takes the longest time of all.

There is also interesting research which shows that a translation’s “fluency” – in the eyes of the post-editor – trumps “correctness”[4], and that a translation which contains a preferred wording but is in fact incorrect will pass, while a correct translation will often be edited to include a preferred wording.[5]

An additional aspect is that jobs involving “light” quality are likely to be avoided by most translators, since they go against the grain of everything a translator finds joy in doing, i.e. the best job possible. Experience also shows that the many decisions that have to be made regarding which changes are needed and which are not often take so much time that the total effort with “light” quality editing is not much less than that with “full” (or even “best”) quality.

Pre-translation or interactive editing?

Then there is the question of whether the job involves a pre-translated document or segment by segment “interactive” translation in a CAT tool with the aid of an MT engine. I have read many articles and presentations and even dissertations on post-editing, and strangely, very few of them have made this distinction. In fact, in many of them it seems as if the authors are primarily thinking of the latter (and the vast majority of the research is done on interactive work). Furthermore, most descriptions or definitions of “post-editing” do not seem to take into account any such distinction. All the more reason, then, to welcome the following definition in ISO 17100:2015, Translation services – Requirements for translation services:

post-edit

edit and correct machine translation output

Note: This definition means that the post-editor will edit output automatically generated by a machine translation engine. It does not refer to a situation where a translator sees and uses a suggestion from a machine translation engine within a CAT (computer-aided translation) tool.

And yet… in ISO 18587, Translation services – Post-editing of machine translation output – Requirements, we are back in the uncertain state: the above note has been removed, and there are no clues as to whether the standard makes any difference between the two ways of producing the target text to be edited.

This may be reasonable in view of the fact that the requirements on the “post-editor” arguably are the same in both cases (and it seems that was the rationale for the decision to delete the note). And it may not matter to the quality of the work performed or the results achieved. But it matters a great deal to the translator doing the work. Basically, there are three possible job scenarios:

  A. The job consists of editing (“post-editing”) a complete document which has been machine-translated; the source document is attached, and the client defines the desired level of quality. The editor (usually an experienced translator) can assess the quality of the translation reasonably well and make an offer based on that. The offer should take into account any necessary adaptation of the source and target texts for handling in a CAT tool.
  B. The job is very much like a normal translation in a CAT tool, except that instead of, or in addition to, an accompanying TM the translator is assigned an MT engine by the client (usually a translation agency). Here, too, a level of quality is defined. The agency may have a template (similar to the “Trados grid”) for payment, or simply a standard level related to the payment for “normal” translation – normally 60%. But usually it is not possible for the translator to assess in advance the time required (partly because there is still no method for judging in advance the quality of an MT engine).
  C. The same as B, but the payment is based on a post-analysis[6] of the edited file and depends on how much use has been made of the MT (and, as the case may be, the TM) suggestions. As in B, it is not possible to assess the time required, nor, in this scenario, the final payment. Also, the translator may not know how the post-analysis is made, in which case the final compensation will be based on trust; a rough sketch of how such an analysis might work follows after this list. (And, of course, if this method of basing payment on a post-assessment of the job done becomes accepted, one can easily foresee it being applied to traditional jobs using CAT tools in combination with TMs, without machine translation, as well.)
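
The post-analysis in scenario C is typically based on comparing the raw MT suggestion with the final edited segment. Since the tools’ own formulas are often undisclosed (Memsource’s approach is documented[6]), what follows is only my own minimal sketch of the principle, assuming a character-based edit distance; the function names and the “leverage” formula are invented for illustration.

```python
# Rough sketch of a post-editing analysis: for each segment, compare the raw
# MT suggestion with the final edited target and report how much of the MT
# output was kept. Real tools use their own (often undisclosed) formulas;
# this one simply normalises a character-level Levenshtein distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def mt_leverage(mt_segment: str, edited_segment: str) -> float:
    """Share of the MT suggestion kept in the final text; 1.0 = used unchanged."""
    if not mt_segment and not edited_segment:
        return 1.0
    dist = levenshtein(mt_segment, edited_segment)
    return 1.0 - dist / max(len(mt_segment), len(edited_segment))

# Example: a payment grid might then pay less the closer the score is to 1.0.
mt = "The engine produces a fluent but slightly wrong sentence."
edited = "The engine produces a fluent but slightly incorrect sentence."
print(f"MT leverage: {mt_leverage(mt, edited):.0%}")
```

Whether such an analysis counts characters or words, and how the scores are then mapped to payment bands, is exactly the kind of detail the translator in scenario C may have to take on trust.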

In addition to this, there are differences between scenarios A and B/C in how the work is done. For instance, in A you can use Find & replace to make changes in all target segments; not so in B/C (unless you start by pre-translating the whole text using MT) – but there you may have some assistance from various other functions offered by the CAT tool and also from using regular expressions (regex); a small illustration follows below. And if it’s a big job, it might be worthwhile, in scenario A, to create a TM based on the texts and then redo the translation using that TM plus any suitable CAT tool features (and regex). And so on.
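
To make the Find & replace/regex point concrete: in scenario A, a recurring MT error can often be corrected across all target segments in one pass. The error type and the pattern below are my own invented illustration, not taken from any particular job or tool.

```python
import re

# Invented example: the MT engine systematically outputs English-style
# decimal points ("3.5 mm") where the target language needs a decimal comma
# ("3,5 mm"). In a pre-translated document (scenario A) one pass over all
# target segments fixes every occurrence; in interactive work (B/C) the same
# pattern could be used in a CAT tool's regex-enabled Find & replace.

decimal_point = re.compile(r"(?<=\d)\.(?=\d)")  # a dot between two digits

segments = [
    "Tighten the screw to 3.5 Nm.",
    "The tolerance is 0.25 mm at 20.5 °C.",
]

fixed = [decimal_point.sub(",", s) for s in segments]
print(fixed)  # ['Tighten the screw to 3,5 Nm.', 'The tolerance is 0,25 mm at 20,5 °C.']
```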

What about the future?

An interesting view of the development of translation work was given to me by Arle Lommel, senior analyst at CSA Research and an expert in the field. It goes like this:

A major shift right now is that post-editing is being replaced by “augmented translation.” In this view, language professionals don’t correct MT, but instead use it as a resource alongside TM and terminology. This means that buyers will increasingly just look for translation, rather than distinguishing between machine and human translation. They will just buy “translation” and the expectation will be that MT will be used if it makes sense. The MT component of this approach is already visible in tools from Lilt, SDL, and others, but we’re still in the early days of this change.

Here, “light” post-editing is not even in the picture; I do not know what today’s demand for it is, but I can imagine that in the future this type of work will mainly be of interest to larger companies, which can probably handle it internally.

If Mr Lommel is correct, it probably means that we can stop using the “post-editing” misnomer as we do today – editing is editing, regardless of whether the suggestion presented in the CAT tool interface comes from a TM or an MT engine. (This erasing of boundaries is well described by Sharon O’Brien.[7]) The term should be reserved for the very specific case of scenario A. This view is taken, among others, in the contributions by a post-editor educator and an experienced post-editor in the recently published Machine Translation – What Language Professionals Need to Know[8],[9].

My own view of the future is as follows (without any figures at all to back it up):

As neural MT – and “adaptive” statistical MT – produces better and better results, and as this becomes known (as it will be) to clients such as big companies and translation agencies, the situation described by Mr Lommel will also mean that prices are forced further down as productivity rises. But this may not be all doom and gloom, as this quotation suggests:

The translators of tomorrow will have more in common with skilled engineers than with today’s linguists, who operate in a craft-driven model. They will wield an array of technologies that amplify their ability and they will be able to focus on those aspects that require human intelligence and understanding, while leaving routine tasks to MT.[10]

(Or, as someone put it, “machine translation will only replace those who translate like machines”.)

Thus it seems that “post-editing” as a service is likely to disappear. But the task of editing MT output will remain, although looking more and more like the task of editing/reviewing the output of a fellow translator. This probably also means that the need for special “post-editors”, as well as a corresponding special training, will disappear – although it will always remain a fact that some translators will avoid the job of editing while others enjoy it. And certainly editing/reviewing merits a place in the education of translators – many of the shortcomings of post-editors found in the research are obviously not primarily caused by the fact that the text-to-be-edited comes from MT.

And while we await that time, very far away I believe, when the only need for human translators will be for translating fiction, we translators of today should strive to make the best possible use of the situation where MT, even if not required by the client, is a resource to be used, or not, as any (other) TM. It is here and will not go away even if some people would wish it to. Or put another way:

[NMT] represents a major breakthrough that [Language Service Providers] and their clients should actively investigate. Those that wait will find themselves at a disadvantage.[11]

References:

[1] Maybe a clue is to be found in this statement: “Post-editing should not be confused with pre-editing.” Although how such a confusion might arise I don’t understand. (From Trusted Translations, https://www.trustedtranslations.com/translation-services/post-editing.asp.)

[2] O’Brien, Sharon, Roturier, Johann, and de Almeida, Giselle (2009): Post-Editing MT Output. CNGL. http://www.mt-archive.info/MTS-2009-OBrien-ppt.pdf

[3] Hansen-Schirra, Silvia, Schaeffer, Moritz, and Nitzke, Jean (2017): Post-editing: strategies, quality, efficiency. In Porsiel (ed.): Machine Translation. Berlin: BDÜ Fachverlag.

[4] Martindale, Marianna J. & Carpuat, Marine (2018): Fluency Over Adequacy: A Pilot Study in Measuring User Trust in Imperfect MT. https://arxiv.org/pdf/1802.06041.pdf

[5] Koponen, Maarit (2013): This translation is not too bad: An analysis of post-editor choices in a machine translation post-editing task. In Proceedings of MT Summit XIV Workshop on Post-editing Technology and Practice. https://pdfs.semanticscholar.org/b659/ec47ebf3d05fe38ada7ed45d3afd54434d74.pdf

[6] See for instance Memsource’s ”Post-editing Analysis”, https://help.memsource.com/hc/en-us/articles/115003942912-Post-editing-Analysis

[7] O’Brien, Sharon (2016): Post-Editing and CAT. In EST Newsletter, no. 48, 2016. https://issuu.com/est.newsletter/docs/2016_48-est

[8] Hansen-Schirra, Schaeffer, and Nitzke, ibid.

[9] Grizzo, Sara (2017): Working as a post-editor: a field report. In Porsiel (ed.): Machine Translation. Berlin: BDÜ Fachverlag.

[10] Lommel, Burchardt and Macketanz (2018): Will neural technology drive MT into the mainstream? In MultiLingual, January 2018. http://dig.multilingual.com/2018-01/index.html?page=0

[11] Lommel, Burchardt and Macketanz, ibid.

 

Trados Studio apps/plugins for machine translation

For a long time now, I’ve been intrigued by the very large number of apps/plugins in the Studio appstore which give access – free or paid – to various types of machine translation services and facilities. Since I have lately found that the use of MT may give surprisingly good results at least for En > Sv (as well as completely useless ones), I was curious to know more about all these options. Here is a brief overview of what I found when exploring them to the best of my ability. (Because of the shitty layout of this site – to be revised – I cannot include the table on this page, but I trust you will be as well served by the separate page.)

This material is also included in the Studio 2017 manual, where the most important entries have much more in-depth descriptions. However, this overview is updated more often than the manual.
