How (un)safe is machine translation?

Note: This is a revised version of a text previously published at the eMpTy Pages blog under the heading “The Data Security Issues Around Public MT – A Translator Perspective”, with an extensive introduction by blog editor Kirti Vashee and some reader comments. This version is slightly updated.

Some time ago there were a couple of posts on this site discussing data security risks with machine translation (MT), notably by Kirti Vashee and by Christine Bruckner. Since they covered a lot of ground and might have created some confusion as to what security options are offered, I believe it may be useful to take a closer look from a narrower perspective, mainly the professional translator’s point of view. And although the starting point is the plugin applications for SDL Trados Studio, I know that most of these plugins are also available for other CAT tools.

About half a year ago, there was an uproar about Statoil’s discovery that some confidential material had become publicly available because it had been translated with the help of a site called translate.com (not to be confused with translated.net, the site of the popular MT provider MyMemory). The story was reported in several places; this report gives good coverage.

Does this mean that all, or at least some, machine translation runs the risk of compromising the material being translated? Not necessarily – what happened to Statoil was the result of trying to get something for nothing, i.e. a free translation. The same thing happens when you use the free services of Google Translate and Microsoft’s Bing. Frequently quoted terms of use for those services state, for instance, that “you give Google a worldwide license to use, host, store, reproduce – – – such content”, and (for Bing): “When you share Your Content with other people, you understand that they may be able to, on a worldwide basis, use, save, record, reproduce – – – Your Content without compensating you”. This should indeed be off-putting to professional translators, but it should not be cited to scare them away from using services to which those terms do not apply.

The principle is this: If you use a free service, you can be almost certain that your text will be used to “improve the translation services provided”; i.e. parts of it may be shown to other users of the same service if they happen to feed the service with similar source segments. However, the terms of use of Google’s and Microsoft’s paid services – Google Cloud Translate API and Microsoft Text Translator API – are totally different from the free services. Not only can you select not to send back your finalized translations (i.e. update the provider’s data with your own translations); it is in fact not possible – at least not if you use Trados Studio – to do so.
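For what it is worth, this is roughly what a call to such a paid service looks like from a program’s point of view – a minimal sketch using Google’s Python client library (the credentials setup and the example sentence are mine, not anything prescribed by Google or by Studio): the request contains nothing but the source text and the language pair, and nothing obliges you to send your finished translation back.

```python
# Sketch only: requires the google-cloud-translate package and a service
# account whose key file is pointed to by GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import translate_v2 as translate

client = translate.Client()

# The request contains nothing but the source segment and the language pair.
result = client.translate(
    "The pump must be switched off before maintenance.",
    source_language="en",
    target_language="sv",
)

print(result["translatedText"])  # the suggestion a CAT tool plugin would display
```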

Google and Microsoft are the big providers of MT services, but there are a number of others as well (MyMemory, DeepL, Lilt, Kantan, Systran, SDL Language Cloud…). In essence, the same principle applies to most of them. So let us have a closer look at how the paid services differ from the free.

Google’s and Microsoft’s paid services

Google states, as a reply to the question Will Google share the text I translate with others: “We will not make the content of the text that you translate available to the public, or share it with anyone else, except as necessary to provide the Translation API service. For example, sometimes we may need to use a third-party vendor to help us provide some aspect of our services, such as storage or transmission of data. We won’t share the text that you translate with any other parties, or make it public, for any other purpose.”

And here is the reply to the question after that, Will the text I send for translation, the translation itself, or other information about translation requests be stored on Google servers? If so, how long and where is the information kept?: “When you send Google text for translation, we must store that text for a short period of time in order to perform the translation and return the results to you. The stored text is typically deleted in a few hours, although occasionally we will retain it for longer while we perform debugging and other testing. Google also temporarily logs some metadata about translation requests (such as the time the request was received and the size of the request) to improve our service and combat abuse. For security and reliability, we distribute data storage across many machines in different locations.”

For the Microsoft Text Translator API, the information is more straightforward, on their “API and Hub: Confidentiality” page: “Microsoft does not share the data you submit for translation with anybody.” And on the “No-Trace” page: “Customer data submitted for translation through the Microsoft Translator Text API and the text translation features in Microsoft Office products are not written to persistent storage. There will be no record of the submitted text, or portion thereof, in any Microsoft data center. The text will not be used for training purposes either. – Note: Known previously as the “no trace option”, all traffic using the Microsoft Translator Text API (free or paid tiers) through any Azure subscription is now “no trace” by design. The previous requirement to have a minimum of 250 million characters per month to enable No-Trace is no longer applicable. In addition, the ability for Microsoft technical support to investigate any Translator Text API issues under your subscription is eliminated.”

Other major players

As for DeepL, there is the same difference between free and paid services. For the former, it is stated – on their “Privacy Policy DeepL” page, under Texts and translations – DeepL Translator (free) – that “If you use our translation service, you transfer all texts you would like to transfer to our servers. This is required for us to perform the translation and to provide you with our service. We store your texts and the translation for a limited period of time in order to train and improve our translation algorithm. If you make corrections to our suggested translations, these corrections will also be transferred to our server in order to check the correction for accuracy and, if necessary, to update the translated text in accordance with your changes. We also store your corrections for a limited period of time in order to train and improve our translation algorithm.”

To the paid service, the following applies (stated on the same page but under Texts and translations – DeepL Pro): “When using DeepL Pro, the texts you submit and their translations are never stored, and are used only insofar as it is necessary to create the translation. When using DeepL Pro, we don’t use your texts to improve the quality of our services.” And interestingly enough, DeepL seems to consider their services to fulfil the requirements stipulated by the EU Commission, in current as well as coming legislation (see below).

Lilt is a bit different in that it is free of charge, yet applies strict Data Security principles: “Your work is under your control. Translation suggestions are generated by Lilt using a combination of our parallel text and your personal translation resources. When you upload a translation memory or translate a document, those translations are only associated with your account. Translation memories can be shared across your projects, but they are not shared with other users or third parties.”

MyMemory – a very popular service which is in fact also free of charge, even though it uses the paid services of Google, Microsoft and DeepL (you cannot select the order in which those are used, nor can you opt out of using them altogether) – also uses its own translation archives and offers the use of the translator’s private TMs. Your own TM material cannot be accessed by any other user, and as for MyMemory’s own archive, this is what they say, under Service Terms and Conditions of Use:

“We will not share, sell or transfer ’Personal Data’ to third parties without users’ express consent. We will not use ’Private Contributions’ to provide translation memory matches to other MyMemory’s users and we will not publish these contributions on MyMemory’s public archives. The contributions to the archive, whether they are ’Public Data’ or ’Private Data’, are collected, processed and used by Translated to create statistics, set up new services and improve existing ones.” One question here is of course what is implied by “improve” existing services. But MyMemory tells me that it means training their machine translation models, and that source segments are never used for this.

And this is what the SDL Language Cloud privacy policy says: “SDL will take reasonable efforts to safeguard your information from unauthorized access. – Source material will not be disclosed to third parties. Your term dictionaries are for your personal use only and are not shared with other users using SDL Language Cloud. – SDL may provide access to your information if SDL plc believes in good faith that disclosure is reasonably necessary to (1) comply with any applicable law, regulation or legal process, (2) detect or prevent fraud, and (3) address security or technical issues.”

Is this the whole truth?

Most of these terms of service are unambiguous, even Microsoft’s. But Google’s leaves room for interpretation – sometimes they “may need to use a third-party vendor to help us provide some aspect of [their] services”, and occasionally they “will retain [the text] for longer while [they] perform debugging and other testing”. The statement from MyMemory about improving existing services also raises questions, but as mentioned above, I am told that this means training their machine translation models, and that source segments are never used for this. However, since MyMemory also utilizes the Google Cloud Translate API (and you don’t know when), you need to take the same care with MyMemory as with Google.

There is also the problem that companies such as Google and Microsoft cannot be made to reply to questions if you want clarifications. And it is very difficult to verify the security provided, so the “trust but verify” principle is all but impossible to implement (and not only with Google and Microsoft).

Note, however, that there are plugins for at least the major CAT tools that make it possible to anonymize (mask) data in the source text you send to the Google and Microsoft paid services, which provides further security. This is also to some extent built into the MyMemory service.
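To give a concrete idea of what such masking involves, here is a minimal sketch of the principle (my own illustration, not the code of any actual plugin): e-mail addresses and numbers are swapped for neutral placeholders before the segment leaves your machine, and swapped back into the returned translation. The translate_with_provider call is a hypothetical stand-in for whatever paid MT service you use.

```python
import re

# Patterns for data you may not want to expose; adjust to your own material.
PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # e-mail addresses
    re.compile(r"\b\d[\d ,.-]*\d\b|\b\d\b"),   # numbers, IDs, amounts
]

def mask(segment):
    """Replace sensitive tokens with placeholders; remember the originals."""
    replacements = {}
    for pattern in PATTERNS:
        for match in pattern.findall(segment):
            placeholder = f"__X{len(replacements)}__"
            replacements[placeholder] = match
            segment = segment.replace(match, placeholder, 1)
    return segment, replacements

def unmask(translation, replacements):
    """Put the original tokens back into the returned translation."""
    for placeholder, original in replacements.items():
        translation = translation.replace(placeholder, original)
    return translation

masked, originals = mask("Please contact anna.svensson@example.com about invoice 4711-22.")
# translated = translate_with_provider(masked)   # hypothetical MT call
translated = masked                               # stand-in so the sketch runs
print(unmask(translated, originals))
```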

But even if you never send back your translated target segments, what about the source data that you feed into the paid services? Are they deleted, or are they stored so that another user might hit upon them even if they are not connected to translated (target) text?

Yes and no. They are generally stored, but – also generally – in server logs, inaccessible to users and only kept for analysis purposes, mainly statistical. Cf. the statement from MyMemory.

My conclusion, therefore, is that as long as you do not return your own translations to the MT provider, and you use a paid service (or Lilt), and you anonymize any sensitive data, you should be safe. Of course, your client may forbid you to use such services anyway. If so, you can still use MT but offline; see below.

What about the European Union?

Then there is the particular case of translating for the European Union, and furthermore the provisions in the General Data Protection Regulation (GDPR), to enter into force on 25 May 2018. As for EU translations, the European Commission uses the following clause in their Tender specifications:

”Contractors intending to use web-based tools or any other web-based service (e.g. cloud computing) to execute the /framework contract/ must ensure full compliance with the terms of this call for tenders when using such services. In particular, the provisions on confidentiality must be respected throughout any web-based process and the Union’s intellectual and industrial property rights must be safeguarded at all times.” The Commission considers the scope of this clause to be very broad, covering also the use of web-based translation tools.

A consequence of this is that translators are instructed not to use “open translation services” (a term which rather begs for a definition, does it not?) because of the risk of losing control over the contents. Instead, the Commission has its own MT system, e-Translation. On the other hand, it seems possible that DG Translation is not quite up to date concerning the current terms of service – quoted above – of the Google Cloud Translate API and the Microsoft Text Translation API, and if so, there is a slight possibility that they might change their policy with regard to those services. But for now, the rule is that before a contractor uses web-based tools for an EU translation assignment, an authorisation to do so must be obtained (and so far, no such requests have been made).

As for the GDPR, it mainly concerns the protection of personal data, which may generally be less of a problem for translators (at least if you don’t handle texts such as medical records, legal cases, etc.). In the words of Kamocki & Stauch on p. 72 of Machine Translation, “The user should generally avoid online MT services where he wishes to have information translated that concerns a third party (or is not sure whether it does or not)”. If you do handle personal data, you should forget about MT, since the new regulation requires you to have a contract with the data processor (i.e. the MT service provider), and I doubt that Google or Microsoft, for instance, will be bothered.

Offline services and beyond

There are a number of MT programs intended for use offline (as plugins in CAT tools), which of course provides the best possible security (apart from the fact that transfer back and forth via email always constitutes a theoretical risk, which some clients try to eliminate by using specialized transfer sites). The drawback – apart from the fact that you are limited to your own TMs – is that they tend to be pretty expensive to purchase.

The ones that I have found (based on investigations of plugins for SDL Trados Studio) are, primarily, Slate Desktop translation provider, Transistent API Connector, and Tayou Machine Translation Plugin. I should add that so far in this article I have only looked at MT providers based on statistical machine translation or its further development, neural machine translation. But it seems that one offline contender which for some language combinations (involving English) also offers pretty good “services” is the rule-based PROMT Master 18.

However, in conclusion I would say that if we take the privacy statements from the MT providers at face value – and I do believe we can, even when we cannot verify them – then for most purposes the paid translation services mentioned above should be safe to use, particularly if you take care not to pass back your own translations. Still, I think both translators and their clients would do well to study the risks described and the advice given by Don DePalma in this article. Its topic is free MT, but any translation service provider who wants to be honest in the relationship with clients, while taking advantage of even paid MT, should take it to heart.

The many faces of post-editing

Note 1: This is a revised version of a text previously published at the eMpTy Pages blog under the heading “‘Post-editing’ – what does it really mean?”. This version is more up-to-date (and slightly enlarged), but the blog post is followed by several interesting comments not included here.

Note 2: This new version, published on 2 April, is a very much revised version of the one previously published here.

You might wonder in what way editing of hits in a CAT tool’s TM is different from editing of ”hits” in an MT engine – because if there were no clear difference, there would be no reason to invent a particular term for the latter.

But the term ”post-editing” has been established since the 1980s, so there should be something to it.[1] The way I see the difference is this: Certainly for many years, Déjà Vu in particular, but also memoQ and perhaps other CAT tools, have cleverly put together TM fragments into more or less complete target segment translations, but they can almost never pre-translate whole documents, something which MT can do. It is true that the MT-translated target text might be almost totally useless, but the point is that a client might come to you and say: Here is a machine-translated document; could you go through it and produce a useful result? Or: Here is a French document; could you run it through this MT engine and edit the target segments into good German? The expectation, of course, being that the use of MT will make the translation cost less.

The difference lies not so much in the job itself – even if many people say that post-editing of MT-translated texts is rather different from the ”usual” translation in a CAT tool, with or without one or more TMs – as in the fact that MT produces a complete translation, usable or not, of (in theory at least) every piece of source text.

Post-editing also means that the client requests the use of an MT engine (or has already used it). If I, in a normal job, use MT and even produce the whole translation by editing MT-translated segments, that is a different case (and one which does not concern anyone but me, provided that confidentiality is not compromised in any way). Another factor to consider is whether the translation task concerns a pre-translated document or the translator translates the text segment by segment in the usual way but using the suggestions from an assigned MT engine. I’ll discuss that later in this article.

The matter of quality

In a post-editing job, a level of quality is also specified – the client wants a translation which is good enough for its purposes but does not want to pay for one that is “unnecessarily” good. Therefore the following quality levels have been defined in ISO 18587:2017 (Translation services – Post-editing of machine translation output – Requirements) and other documents:

  1. Light post-editing (also called “gisting”): The final text is understandable and correct as to content, but the editor need not – and should not – strive for a text much better than that; s/he should use as much as possible of the unedited MT version.
  2. Full post-editing: The result, according to some definitions, should be ”indistinguishable from human output” (in the words of ISO 18587), or ”publishable”. But there are conflicting views on this: some sources say that stylistic perfection is not expected and that clients actually do not expect the result to be comparable to “human” translation. ”Do not worry too much about style, standards of textuality” and ”Quality expectations: medium”.[2] And: ”Texts that are post-edited should not strive for linguistic perfection; instead, the goal should be linguistic adequacy.”[3]

Looking past the definitions in the standard and instead using definitions of what I perceive as practice, I would say that there are in fact three levels of post-edited MT output: At the top there is the result ”indistinguishable from human output”, i.e. it is impossible to tell whether MT has been used or not. Slightly below that, there is the ”full” post-editing: correct in all regards but perhaps not top-level stylistically. And then there is the ”light” level: usable for understanding but not more (and not much fun to read).

Of course these categories are only points on a continuous scale; it is difficult to objectively test that a PEMT text fulfils the criteria of one or the other. (Is the light version really not above the target level? Is the full version really up to the requirements? Has the client specified what type of full version is required?).

There are some interesting research results as to the efforts involved, insights which may be of help to the would-be editor. It seems that editing medium-quality MT output (at all levels) takes more effort than editing poor output – it is cognitively more demanding than simply discarding and rewriting the text. Also, the effort needed to detect an error and decide how to correct it may be greater than the rewriting itself; and reordering words and correcting mistranslated words take the longest time of all.

There is also interesting research which shows that a translation’s “fluency” – in the eyes of the post-editor – trumps “correctness”[4], and that a translation which contains a preferred wording but is in fact incorrect will pass, while a correct translation will often be edited to include a preferred wording.[5]

An additional aspect is that jobs involving “light” quality are likely to be avoided by most translators, since such work goes against the grain of everything a translator finds joy in doing, i.e. doing the best job possible. Experience also shows that the many decisions that have to be made about which changes are needed and which are not often take so much time that the total effort of “light” quality editing is not much less than that of “full” (or even ”best”) quality.

Pre-translation or interactive editing?

Then there is the question of whether the job involves a pre-translated document or segment by segment “interactive” translation in a CAT tool with the aid of an MT motor. I have read many articles and presentations and even dissertations on post-editing, and strangely, very few of them have made this distinction. In fact, in many of them it seems as if the authors are primarily thinking of the latter (and the vast majority of the research is done on interactive work). Furthermore, most descriptions or definitions of “post-editing” do not seem to take into account any such distinction. All the more reason, then, to welcome the following definition in ISO 17100:2015, Translation services – Requirements for translation services:

post-edit

edit and correct machine translation output

Note: This definition means that the post-editor will edit output automatically generated by a machine translation engine. It does not refer to a situation where a translator sees and uses a suggestion from a machine translation engine within a CAT (computer-aided translation) tool.

And yet… in ISO 18587, Translation services – Post-editing of machine translation output – Requirements, we are back in the uncertain state: the above note has been removed, and there are no clues as to whether the standard makes any difference between the two ways of producing the target text to be edited.

This may be reasonable in view of the fact that the requirements on the “post-editor” arguably are the same in both cases (and it seems that was the rationale for the decision to delete the note). And it may not matter to the quality of the work performed or the results achieved. But it matters a great deal to the translator doing the work. Basically, there are three possible job scenarios:

  1. The job consists of editing (“post-editing”) a complete document which has been machine-translated; the source document is attached, and the client defines the desired level of quality. The editor (usually an experienced translator) can reasonably well assess the quality of the translation and based on that make an offer. The offer should take into account any necessary adaptation of the source and target texts for handling in a CAT tool.
  2. The job is very much like a normal translation in a CAT tool except that instead of, or in addition to, an accompanying TM the translator is assigned an MT engine by the client (usually a translation agency). Here, too, a level of quality is defined. The agency may have a template (similar to the “Trados grid”) for payment, or simply a standard level related to the payment for “normal” translation – normally 60%. But usually it is not possible for the translator to assess in advance the time required (partly because there is still no method for judging in advance the quality of an MT engine).
  3. The same as B, but the payment is based on a post-analysis[6] of the edited file and depends on how much use has been made of the MT (and, as the case may be, the TM) suggestions. As in B, it is not possible to assess the time required, nor, in this scenario, the final payment. Also, the translator may not know how the post-analysis is made, in which case the final compensation will be based on trust; a rough illustration of such a calculation is sketched after this list. (And, of course, if this method of basing payment on a post-assessment of the job done becomes accepted, one can easily foresee it being applied as well to traditional jobs using CAT tools in combination with TMs, without machine translation.)
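As promised, here is a rough sketch of how such a post-analysis might be calculated. The similarity measure and the example segments are purely illustrative and have nothing to do with the formula any particular tool (Memsource or otherwise) actually uses; the point is only that payment in scenario C depends on how close the final segment stays to the raw MT suggestion.

```python
import difflib

def mt_leverage(mt_suggestion, final_translation):
    """Rough similarity between the raw MT output and the edited segment,
    as a percentage (100 = the MT suggestion was used unchanged)."""
    ratio = difflib.SequenceMatcher(None, mt_suggestion, final_translation).ratio()
    return round(ratio * 100)

segments = [
    ("Motorn måste stängas av före service.",
     "Motorn måste stängas av före service."),       # MT used as-is
    ("Motorn måste ha stängts av innan den servas.",
     "Motorn måste stängas av före service."),       # heavily edited
]

for mt, final in segments:
    print(f"{mt_leverage(mt, final):3d}%  {final}")
```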

In addition to this, there are differences between scenarios A and B/C in how the work is done. For instance, in A you can use Find & Replace to make changes in all target segments; not so in B/C (unless you start by pre-translating the whole text using MT) – but there you may have some assistance from various other functions offered by the CAT tool, and also from regular expressions (regex); an example follows below. And if it’s a big job, it might be worthwhile, in scenario A, to create a TM based on the texts and then redo the translation using that TM plus any suitable CAT tool features (and regex). And so on.
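As an example of the kind of global fix that regex makes easy in scenario A, here is a small illustration (shown in Python for clarity, but the same search pattern can be typed into a CAT tool’s regex-aware Find & Replace box; the Swedish example segments are my own): converting English-style decimal points left behind by the MT engine into decimal commas, without touching ordinary full stops.

```python
import re

# A full stop is replaced only when it sits between two digits.
decimal_point = re.compile(r"(?<=\d)\.(?=\d)")

targets = [
    "Dra åt skruven till 3.5 Nm.",
    "Temperaturen får inte överstiga 37.5 °C. Se avsnitt 2.",
]

for segment in targets:
    print(decimal_point.sub(",", segment))
# -> "Dra åt skruven till 3,5 Nm."
# -> "Temperaturen får inte överstiga 37,5 °C. Se avsnitt 2."
```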

What about the future?

An interesting view of the development of translation work was given to me by Arle Lommel, senior analyst at CSA Research and an expert in the field. It goes like this:

A major shift right now is that post-editing is being replaced by “augmented translation.” In this view, language professionals don’t correct MT, but instead use it as a resource alongside TM and terminology. This means that buyers will increasingly just look for translation, rather than distinguishing between machine and human translation. They will just buy “translation” and the expectation will be that MT will be used if it makes sense. The MT component of this approach is already visible in tools from Lilt, SDL, and others, but we’re still in the early days of this change.

Here, “light” post-editing is not even in the picture; I am not aware of today’s demand, but I can imagine that this type of work will in the future be of interest mainly to larger companies, which then can probably handle it internally.

If Mr Lommel is correct, it probably means that we can stop using the “post-editing” misnomer as we do today – editing is editing, regardless of whether the suggestion presented in the CAT tool interface comes from a TM or an MT engine. (This erasing of boundaries is well described by Sharon O’Brien.[7]) The term should be reserved only for the very specific case of scenario A. This view is taken in, among others, the contributions by a post-editor educator and an experienced post-editor in the recently published Machine Translation – What Language Professionals Need to Know[8],[9].

My own view of the future is as follows (without any figures at all to back it up):

As neural MT – and “adaptive” statistical MT – produces better and better results, and this becomes known (as it will be) to clients such as big companies and translation agencies, the situation described by Mr Lommel will also mean that prices will be forced further down as productivity rises. But this may not be all doom and gloom, as this quotation states:

The translators of tomorrow will have more in common with skilled engineers than with today’s linguists, who operate in a craft-driven model. They will wield an array of technologies that amplify their ability and they will be able to focus on those aspects that require human intelligence and understanding, while leaving routine tasks to MT.[10]

(Or, as someone put it, “machine translation will only replace those who translate like machines”.)

Thus it seems that “post-editing” as a service is likely to disappear. But the task of editing MT output will remain, although looking more and more like the task of editing/reviewing the output of a fellow translator. This probably also means that the need for special “post-editors”, as well as a corresponding special training, will disappear – although it will always remain a fact that some translators will avoid the job of editing while others enjoy it. And certainly editing/reviewing merits a place in the education of translators – many of the shortcomings of post-editors found in the research are obviously not primarily caused by the fact that the text-to-be-edited comes from MT.

And while we await that time, very far away I believe, when the only need for human translators will be for translating fiction, we translators of today should strive to make the best possible use of the situation where MT, even if not required by the client, is a resource to be used, or not, like any (other) TM. It is here and will not go away, even if some people wish it would. Or, put another way:

[NMT] represents a major breakthrough that [Language Service Providers] and their clients should actively investigate. Those that wait will find themselves at a disadvantage.[11]

References:

[1] Maybe a clue is to be found in this statement: “Post-editing should not be confused with pre-editing.” Although how such a confusion might arise I don’t understand. (From Trusted Translations, https://www.trustedtranslations.com/translation-services/post-editing.asp.)

[2] O’Brien, Sharon, Roturier, Johann, and de Almeida, Giselle (2009): Post-Editing MT Output. CNGL. http://www.mt-archive.info/MTS-2009-OBrien-ppt.pdf

[3] Hansen-Schirra, Silvia, Schaeffer, Moritz, and Nitke, Jean (2017). Post-editing: strategies, quality, efficiency. In Porsiel (ed.): Machine Translation. Berlin: BDÜ Fachverlag.

[4] Martindale, Marianna J. & Carpuat, Marine (2018): Fluency Over Adequacy: A Pilot Study in Measuring User Trust in Imperfect MT. https://arxiv.org/pdf/1802.06041.pdf

[5] Koponen, Maarit (2013): This translation is not too bad: An analysis of post-editor choices in a machine translation post-editing task. In Proceedings of MT Summit XIV Workshop on Post-editing Technology and Practice. https://pdfs.semanticscholar.org/b659/ec47ebf3d05fe38ada7ed45d3afd54434d74.pdf

[6] See for instance Memsource’s ”Post-editing Analysis”, https://help.memsource.com/hc/en-us/articles/115003942912-Post-editing-Analysis

[7] O’Brien, Sharon (2016): Post-Editing and CAT. In: EST Newsletter 48, 2016. https://issuu.com/est.newsletter/docs/2016_48-est

[8] Hansen-Schirra, Schaeffer, Nitke, ibid.

[9] Grizzo, Sara (2017): Working as a post-editor: a field report. In Porsiel (ed.): Machine Translation. Berlin: BDÜ Fachverlag.

[10] Lommel, Burchardt and Macketanz (2018): Will neural technology drive MT into the mainstream? In MultiLingual, January 2018. http://dig.multilingual.com/2018-01/index.html?page=0

[11] Lommel, Burchardt and Macketanz, ibid.

 

Trados Studio apps/plugins for machine translation

For a long time now, I’ve been intrigued by the very large number of apps/plugins in the Studio appstore which give access – free or paid – to various types of machine translation services and facilities. Since I have lately found that the use of MT may give surprisingly good results at least for En > Sv (as well as completely useless ones), I was curious to know more about all these various options. Here is a brief overview of what I found when trying to explore them to the best of my ability. (Because of the poor layout – to be revised – of this site, I cannot include the table on this page, but I trust you will be as well served by the separate page.)

This material is also included in the Studio 2017 manual, where the most important entries have much more in-depth descriptions. However, this overview is updated more often than the manual.

Excellent book on machine translation

Review of Jörg Porsiel (ed.): Machine Translation. What Language Professionals Need to Know. 260 pages. BDÜ Fachverlag 2017. €49. Order here (“Warenkorb” means “shopping basket”).

If you are interested in machine translation (MT) and would like to know more – in fact a lot – about it without having to trawl the internet (and getting a lot of less interesting hits), you can hardly do better than reading this recently published book. It contains 22 contributions on every aspect of MT, from its development and various technical viewpoints (such as the roles of controlled language and of terminology and the integration of MT in CAT environments; also – and very interesting, too – data protection under the GDPR, that is the European Union’s General Data Protection Regulation) to quality management (a very interesting text on the amalgamation of the TAUS Dynamic Quality Framework and the German Multidimensional Quality Metrics into a common error typology) and, lastly, to a number of practical examples (the European Commission, Volkswagen, Microsoft, ZF, and Catalonia).

In addition, a sizeable portion of the book (54 pages) centres on so-called post-editing, that is, the “manual reworking of MT output”, as the book’s glossary phrases it. It covers the new standard on post-editing, ISO 18587; the education of post-editors; “strategies” for post-editing; and a “field report” by a translator who has worked as a post-editor for ten years.

What is particularly interesting here is that there are differing viewpoints on some essential matters. For one thing, both the field report and the text on education make it quite clear that post-editing means getting a pre-translated (via MT) document for editing and making it ship-shape. This is not at all obvious in the other texts. In fact, some of them clearly include interactive translation in a CAT tool with the aid of MT – this is the case with the text on the pricing of post-editing services, which is based only on such interactive translation (but it’s still very interesting).

For another, there are obviously differing views on the meaning of “full” post-editing. The ISO standard is quoted as specifying that such editing shall result in a text that “must be indistinguishable from a human translation”. But in other places we learn that “stylistic perfection is also not expected” nor is linguistic perfection, and that “post-editing is not the same as traditional translation, and that customers/clients do not want it to be [italics mine]” – the latter statement by the experienced post-editor, which makes it particularly notable.

The texts are generally easy to read, even if the occasional one can be very technical (the text on terminology is a case in point), and some of the practical examples contain historical material which may be less interesting to the general reader. But in all, this is a gold mine for anyone more than cursorily interested in this exciting field, and you are likely to return to it many times, which makes it worth its price. You will hardly find a more comprehensive – and up-to-date – overview of the MT world anywhere; that it is very much centered on the practical side of things makes it all the more useful. And I would say it is particularly valuable to the freelance translator who wants to know what the future is likely to hold (unless you mainly translate fiction, poetry and drama).

There is a brief presentation by the publisher here. And there are sample pages – 14 of them, including the contents list – in English here.

Searching for text in tags – and for text which happens to include tags

If you are searching for text in tags, you can use the extremely versatile app Integrated Search Views (which permits an enormous variety of filtering and handling options). You don’t need to do so, however, because the Advanced Display Filter (new in Studio 2017) will also look in tags for the text you are searching for (although it doesn’t tell you that explicitly).

However, there is a downside to this: As yet, you cannot turn this function off, which means that if you are searching for an expression in the text which happens to contain tags (for instance formatting tags), the filter will not find the expression, because it sees the tags (which you did not include in your search string) as well.

But there is a simple remedy: the “basic” Display Filter does not work in this way. So if you don’t need any of the more advanced functions in the Advanced Display Filter but only are looking for plain text – use the basic filter function and you’ll be fine.

You can read more about this in detail here (if you have an SDL account).

Translating PDF format to PDF format

If you are sometimes stuck with a pdf file without recourse to the source, and the layout is complicated, with perhaps images inserted in the text, various columns, and whatnot – then your solution is probably called Infix.

Infix is a program with two main functions: (1) It allows you to edit any pdf file. (2) It allows you to create an xliff file for translation in Studio (or other CAT tools); the translation (in xliff format) can then be imported into Infix, where a translated pdf can be created (and edited). Note that what you get is thus a pdf; while you can also get the same text in rtf format (see below), that document is completely without layout and thus of very limited use, i.e. no better than what Studio can produce.

Here is what you do. (For background and a training video, you should also read & watch Paul Filkin’s blog post Handling PDFs… is there a best way?)

Download, registration and installation

First, go to the Infix home page (www.iceni.com) and take a look, for your information. Then select Infix PDF Editor and Try It For Free!, which will download the installation zip file. Install it.

After installation, you should also click Buy from €8.99 on the http://www.iceni.com/infix.htm page. Doing this does not mean you have to buy; you arrive at a page where you can see the different purchase options and also register for the free trial: scroll to somewhere around the middle of the page, click the Register button and go on from there.

Note: The difference between the free trial and the subscription is that with the former, you can edit no more than 50 final pdf pages in the Infix PDF Editor; after that it’s 50 cents per page. And the resulting page(s) are likely to require at least some editing. So do your calculations and make your choice.

The work process

1.      Open Infix.

2.      Open the pdf file you want to translate.

3.      Select Translate > Export as XLIFF.

This process requires you to log in (with the details you created/gave during registration) and also to select source and target languages as well as file name. Furthermore, your file organiser will open, so that you can check if the xlf file has been created. This is just for your information; close it if you like.

The exported document will be opened in Infix and you can see if it looks promising (with really complicated pages, you may see that you will still have some work to do after everything is done, but believe me, that’s nothing compared to all other ways of handling the same material).

Note: During this process, a box opens telling you that the document is being uploaded to TransPDF.com. That page is where everything is being done, and if you want to, you can follow the processes there, as in this image. As for how to open that page, see the end of this post.

Now you’re ready to translate:

4.      Open Studio.

5.      Select Translate Single Document, open the xlf file you just created and select suitable TM(s) or machine translation or whatever. If appropriate, run a pre-translation batch task.

6.      When your translation is finalised (or when you just want to check how it looks), save it in xlf format (Shift+F12). Normally it’s OK to overwrite the original xlf file (you will be asked).

Note: All the following steps can be performed whether the translation is complete or only partly done.

7.      In Infix, select Translate > Import translated XLIFF.

8.      Browse to the xliff translation you just created and then select the Import button.

Preview or go to Final PDF?

When the import is done, you get to choose whether to view a preview of the result. You can do this as an intermediate step, or you can skip it and download the final pdf version. In both cases, you get to choose between Normal view, Compare Horizontal and Compare Vertical, the comparisons being between the source document and the preview/final version. The preview will be watermarked (“TransPDF.com”), but that will be removed in the final translation. Another difference is that the preview is read-only, whereas the final version can be edited (see below).

The preview also contains a starting page listing translation data as well as any font problems and their resolutions (such as “Futura-Bold -> Alegreya Sans Black”) and instructions on how to deal with possible problems. If you skip this stage, you can still get this font report page from the Infix site – see below.

9.      Download the final pdf: Select Translate > Download Final PDF. Select the translated xlf file and, if necessary, rename it so that the result is kept separate from the previous translation file. As with the preview, you can choose to open it alone or together with the original pdf. If you open it for comparison but decide you only want to see/work with the resulting pdf alone, just close the comparison and open the result (with Ctrl+O as usual).

Together with the opened file, you may get a window listing possible problems, such as this:

For a close look at a problem, select it and click View. You may experiment with the Text Fitting option, but you can also use the editing tools on top of the Infix window. Paul Filkin gives some instructive examples of such editing in his video, at 10:25. The editing is a bit tricky, but there is a comprehensive guide to be accessed via the program’s Help menu; also on-line tutorials.

You can also have the translation in rtf format (without any kind of page layout). For that, you need to go to your own TransPDF site: Go to the registration/sign-in page at http://www.iceni.com/transpdf.htm and sign in. This page opens:

As you see, you have here some of the options on the Translate menu, plus the option of downloading the final rtf, which sometimes might be useful for editing purposes. Also, once you have translated the file you can use this page instead of the Infix interface. One difference is that for the editing of the final pdf, you do need Infix.

The AppStore at SDL

The SDL OpenExchange is no more. In its place we now have the SDL AppStore. But what’s in a name? There are more important changes: the user interface has been completely re-vamped and is now very much more user-friendly. Here is a brief orientation which may help you to utilize it.

This is the start page:

However, this is mainly a showcase; as soon as you make a search or click a See all link, you will get more search options (see below). What, then, do the categories here stand for?

  • Apps of the Month are apps that SDL wants to promote at the moment, such as new apps, older but popular apps, or apps that – for some reason – need more exposure.
  • Latest apps are just that (but they are called, on the other pages, Most recent). A “late” (or “recent”) app is an app that is either new, has newly been revised (with a new edition number), or has newly been revised even if the edition number is the same. A bit confusing.
  • Most popular Studio 2015 apps are the most downloaded Studio 2015 apps.
  • At the bottom of the page there are four Apps for terminology. This will change to other categories now and then.

So how do you search?

As soon as you use the search field (and, by the way, All to the left in the field means All products; click the arrow and you get the same options as under Product to the left on the page) or as soon as you click a See all link, you arrive at a page like this, with the apps listed 15 per page (unless of course your search results in fewer hits):

As you see, you can filter for product, pricing and categories (but note that the Language sub-menu, where you can select different categories, is not visible until you select Language).

The options on the green menu at right: Most downloaded and Most recent are the same as on the start page (with Most recent = Latest apps). Most reviewed is exactly that (although of course the reviews are not always positive; however, they sometimes contain valuable information, so it may pay to take a look). Top rated is… I don’t know. Not those with the best review grades, anyway.

So there you are: a brand new and much more user-friendly presentation. If you haven’t explored the SDL AppStore before, when it was called OpenExchange, don’t hesitate. A whole new world of useful additions to Studio is waiting for you there. And if you want more advice on how to make the most of this world, read Paul Filkin’s blog post Managing your SDL Plugins.

And as a consequence of the user-friendliness of the AppStore site, I see no reason to retain the OpenExchange overview any longer.

PhraseExpress

This text replaces the corresponding section in the manual; it has been removed there in order to save space, but also because its “competitor”, AutoHotkey, seems to be more popular. AutoHotkey is also easier to use; on the other hand, I think PhraseExpress offers a large number of useful functions well worth exploring.

Start at the PhraseExpress feature list and look round; then download and try it.
The application, when started, is found in the Taskbar’s system tray. Right-clicking it will produce this menu:

You open the PhraseExpress window by selecting Edit phrases:

This is where you manage your autotext entries, phrases, hotkeys, etc.; we’ll get back to that. It is a good idea to familiarise yourself with the Help, which you can do even without installing PhraseExpress: it is here.

Note 1: The help text often refers to the Settings option, which you will find on the Tools menu.

Note 2: The PhraseExpress functions do not work if you have this window open, so after any action performed in it: minimise it or close it.

Text replacement (with AutoText)

1. Select the phrase you want PhraseExpress to insert when you type its “abbreviation”.
2. Press Ctrl+Alt+C. This dialog box opens:

3. Enter a suitable Autotext abbreviation. (The Hotkey option is mainly intended for the execution of macros; see below.)
4. Press OK.

When you type the abbreviation and the selected delimiter, the entry in the Description field will be inserted instead.

AutoCorrect

There is no specific auto-correction function; just as in Word, any misspelled word listed as an “abbreviation” will be replaced by its corresponding (correctly spelled) Description. Of course, for this you need a list corresponding to the lists provided with Word, and you need to import it into PhraseExpress. Depending on language, there are two alternatives:

  • Use one of the lists offered by PhraseExpress: En, De, Nl, Fr, Es, Po, or It
  • Import your Word AutoCorrect entries

Import AutoCorrect entries provided by PE

  1. Open the PhraseExpress window (right-click the tray icon and select Edit phrases).
  2. In the Phrases and Folders pane, open the File menu and select Download additional contents. The PhraseExpress site opens with the Free PhraseExpress Add-Ons window.
  3. Click a suitable AutoCorrect file and save it.
  4. In the Phrases and Folders pane, select New folder.
  5. Open the File menu, select Import and then PhraseExpress Phrase File.
  6. Locate the file you just downloaded (a .pxp file) and open it. Answer Yes to the message window that opens (to avoid duplicate entries).

The result (for German) looks like this (the corresponding English material is already provided by default):

Import Word AutoCorrect entries

  1. Open the PhraseExpress window (right-click the tray icon and select Edit phrases).
  2. In the Phrases and Folders pane, select New folder.
  3. Open the File menu, select Import and then MS Word AutoCorrect entries. Answer Yes to the message window that opens (to avoid duplicate entries).
  4. A new folder, Imported MS Word AutoCorrect entries, is created, with the imported content.

Should it happen that the import consists of the English list instead of your target language, you need to extract the AutoCorrect entries for that language – see the instructions in the AutoHotkey section below.

Input correction entries with TypoLearn

When you make a manual correction of a typing error, PhraseExpress registers that as an AutoCorrect entry for future use. (It seems you have to make the same correction three times for PhraseExpress to pick it up.) This applies to single-word entries if you have ended them with a space character, then deleted that space with backspace, corrected the word and then again ended it with a space. Entries are stored in the Word Corrections folder.

Text suggestions (AutoComplete)

Here is a potentially very useful function: PhraseExpress can recognise words, phrases and spelling corrections which have occurred repeatedly and store them for use exactly in the way Studio uses AutoSuggest. It may be a good idea to take a look at the settings for this (Tools > Settings > AutoSuggest).

Import an external phrase file

You can import phrase files of your own (e.g. to provide text suggestions for AutoComplete). See the Help file, the section headed Importing an External Bitmap or Text File.

Enable/disable a phrase folder

Obviously, you can have phrase folders with contents in different languages. To avoid possibly confusing AutoCorrections etc., you can disable irrelevant folders: right-click the folder and select Enable Autotext/Hotkeys so that the checkmark disappears.

Clipboard manager

PhraseExpress has a “clipboard cache” function which saves a number of clipboard contents. By pressing Ctrl+Alt+V, you can select them from a popup menu (and by right-clicking an entry you get further options).

Macros

There is an enormous number of actions you can perform using the macro functions in PhraseExpress. Most of them may not be very useful in Studio, however.

 

Dependency file not found

When you open a partly translated file to continue translating it, you may encounter the error message “Dependency file not found” with the question “Would you like to browse for this [i.e. the original] file?”.

What to do:

If you have the original source file, the simplest solution is to answer Yes to the question in the error message and locate the source file. But if you are working on a project package, you will normally not have any source files included. Here are two ways to proceed:

  • Close the project in Studio. Go to the project’s TM file (where all your translations so far are stored) and re-name it (or if you want to be really safe, copy it to another location). Open the original .sdlproj file again (i.e. re-create the project from scratch). Then change the project settings to use your “old” TM instead of the newly created one, and run the batch task Pre-translate Files. (Whatever you do, do not just re-open the project package without safeguarding your TM, since the TM which is generated will overwrite the existing TM with the same name and you will have lost all your work.)
  • Another method in both cases (project package or not) is to skip the source file matter and answer No to the question in the error message. You can then continue translating as usual, but you cannot Save Target As, Finalize, Generate Target Translations or Preview. What you can do, however, is make sure that the TM you produce is complete, i.e. that no segments are left unconfirmed or untranslated.

Once you have done this, you can start from scratch using the TM you have just produced. Or, in case of a project package, follow the procedure described above.

There are other solutions, mostly to do with restoring the dependency files or repairing the .sdlxliff files, but to me they seem unnecessarily complicated and not completely reliable.

Why this happens:

According to Knowledge Base article 3897 (see below), a dependency file is created “when the original file is too large to be embedded in the .sdlxliff file”; it contains a link to the original file and is stored as a temporary (.temp) file. However, some computer tune-up/diagnostics software will delete all such temporary files unless instructed not to (you need to find out for yourself how to do that). It could also happen that the Windows hibernation function is the cause, in which case that particular energy option needs to be disabled.

Furthermore, you can adjust the Studio setting which controls the file size that leads to the creation of dependency files. Go to File > Options > File Types > SDLXLIFF – General and move the slider under “Embedding” to its maximum (100 MB). Why is the default value 20 MB, and will this change have any negative effects? I don’t know. (Thanks to Walter Blaser for pointing to this solution.)

There are two entries in the SDL Knowledge Base dealing with this problem:

Article 3897 (for project packages), and

Article 4731 (for a corrupted .sdlxliff file)

The latter describes (under Resolution) how to recreate the .sdlxliff file, which could be a useful option. It is, however, not primarily intended for the case when the dependency file is lost but for when the .sdlxliff file is for some reason corrupted.

Creating one .sdlxliff file for several virtually merged files

If you are working on a project with several files – in particular if at least some of them are pretty small – and you did not merge them when the project was created, then it may be a nuisance that you cannot export them all into one file for review/proofreading.

However, the fact is that you can! A file necessary for this is created automatically as Studio auto-saves the files you are working on. This means that if you go to

C:\Users\<username>\AppData\Local\Temp

you will see that a .tmp file (with a non-meaningful name like “tmp123A”) is created at the interval set for auto-saving. (You will also see that a large number of those – and similar – temp files collect there, eventually taking up a lot of disk space. This is not a good thing, of course: normally, only the last of them is useful for this purpose. But don’t start deleting en masse – some of the temp files are necessary for you to be able to create target files.)

When the files are ready for review, just copy that last .tmp file to a location of your choice and rename the file extension to .sdlxliff (and maybe give the file itself a meaningful name). Then you can use it as appropriate: send it to a colleague for review in Studio, or open it in Studio yourself and export it for bilingual review. In the latter case, before you can perform any batch tasks at all, you need to (1) save the file, and (2) change its language (in the Files view) to the source language.
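If you do this often, the copy-and-rename step can be scripted. Here is a minimal sketch – the temp folder, the tmp* naming pattern and the output path are assumptions taken from the description above, so check them against what you actually see on your own machine – that copies the most recently modified tmp file and gives it an .sdlxliff extension.

```python
import os
import shutil
from pathlib import Path

# Assumed locations - adjust to your own setup.
temp_dir = Path(os.environ["TEMP"])   # usually C:\Users\<username>\AppData\Local\Temp
out_file = Path.home() / "Documents" / "merged_for_review.sdlxliff"

# Pick the most recently modified tmp* file, i.e. the one Studio wrote last.
candidates = sorted(
    (p for p in temp_dir.glob("tmp*") if p.is_file()),
    key=lambda p: p.stat().st_mtime,
)
if not candidates:
    raise SystemExit("No tmp files found - has AutoSave run yet?")

shutil.copyfile(candidates[-1], out_file)
print(f"Copied {candidates[-1].name} -> {out_file}")
```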

In all, this is excellent news, and you can read more about it in Paul Filkin’s blog post, Good bugs… bad bugs!, which includes a video tutorial. Thanks are also due to Yuji Yamamoto for discovering this in the first place.

Note: It has been known to happen (i.e. I and a few other people have noticed) that the AutoSave function suddenly does not work any more (but the temp function described above still does). You can check that by opening the AutoSave folder, located here:

c:\Users\[USERNAME]\Documents\Studio 2015\AutoSave\

and see whether it is updated appropriately. If not, it may help to deactivate the function and then activate it again (it is active by default). You will find it under File > Options > Editor; then look at the AutoSave header at the bottom of the right-hand pane.

All this will of course be included in the next edition of the manual.
