ADVERTISEMENT
The Words You Want. Anywhere, Anytime
Let WordFinder open a new world of opportunities -- get access to millions of words and translations from the best dictionaries, on your computer, via a web browser, on your smartphone or tablet. Stuffed with lots of smart features. WordFinder has what you need as a translator in your everyday work -- anywhere, anytime!
Read more at www.wordfinder.com.
1. crossPollination
In previous newsletters I've talked about Across and its awakening to the translator community. It's not that Across isn't being used by translators -- but very often it's used as a tool to fulfill client requirements rather than as a technology of choice. This is despite the fact that Across for freelancers was free all along. So they thought, let's make it paid and see how that works.
Sound weird? A little maybe, but it might just work for them.
Here is how this all came about: Years ago, Across formed advisory boards composed of users of their technology. Naturally they started with translation buyers ("Corporate Advisory Board") since they really make up the core of Across' business. Later they started an LSP Advisory Board (2013), and last year (2014) they established a Translators' Advisory Board. These boards meet twice a year and -- according to Christian Weih of Across -- they also include users who are rather critical of the software, at least in the case of the translators' board.
The version of Across that has just been released (6.3) is the first to show some of the results from those translators' voices, and not surprisingly, much has to do with some level of exchangeability. Across has always had a different strategy than their competitors. Rather than supporting exchange standards like TMX, TBX, or XLIFF, or allowing translators to associate their own resources to projects (such as TMs or termbases), Across developers believed in a closed loop economy. As Christian expressed it, "We always liked Apple's approach." And so did their corporate clients.
Here are the features in the new version that deal with sharing resources (if you've never used Across, you might not find this too overwhelming, but that's likely different for the experienced Across user):
- It is now possible to attach your own translation memory and termbase to each project that you receive from your client (unless your client has disabled that feature). You can both read and write to those resources, and the client will not see how much you have benefitted from your own materials.
- Through the so-called crossConnect module, it's also possible to output projects in the translation file exchange format XLIFF so they can be processed in other tools -- but this comes with a twist. Each XLIFF file is encrypted and can be opened only by the third-party application designated by the person who sets up the project (for example, QA apps such as Xbench or QA Distiller, or machine translation applications). While it's theoretically possible that these files could be processed in competing translation environment tools such as SDL Trados or OmegaT, the makers of those tools would first have to develop an interface to read the encrypted files (Across provides an API for that). Even if they did, however, it's not likely that your client would allow it, especially since the option is disabled by default and comes with variable levels of editability (machine translation suggestions only, commenting, and/or full edits). In summary, the project exchange via XLIFF is so limited at this point that it's less an exchange than a way to add some features that Across itself doesn't offer.
What else is new? A number of smaller but helpful items (including better filtering, more flexibility with inline codes, better sorting, extended use of delimiters, ID-based segments, and JSON file support) plus the addition of a PDF filter. I wasn't able to figure out what the underlying engine is, and the couple of tests I ran were, well, not particularly great -- but then hardly any PDF filter can claim to be "particularly great." I did notice that Across had more problems with line endings than competing products, though.
The PDF filter is interesting in a couple of ways, however: you can choose between more text-oriented and more format-oriented processing, and you can attach the original PDF for reference purposes.
Other non-translator-specific new features in this version include the introduction of batch processes in the termbase and the unveiling of crossTerm Now, which is a browser-based dictionary-like display of otherwise complex termbase content.
But let's come back to the whole issue of this being a paid version for freelance translators. First of all, it's not completely true. There is still a "Basic" version that's free. But that version neither allows you to use the newly introduced feature of attaching your own TMs and termbases nor lets you export documents. This means it's an acceptable version to use if you a) never plan to use it for your own projects and b) are either not interested in using your own data or know that your clients disable that feature anyway.
All other Across users will have to pay. And mind you, it's not actually a direct payment for the software. You have to become a Premium member of the newly launched marketplace crossMarket to have access to the Premium version. The price point for translators is presently around 18 euros per month, or 9 euros per month if you pay for a whole year.
Every individual Across user now has to register in that marketplace -- either for free with a very limited number of fields to choose from and access to the Basic Across edition, or as a Premium member with much better possibilities to market yourself and your services and access to the Premium version of Across.
Across gave me free access to a Premium subscription so I was able to look through the portal a bit. The portal houses not only individual translators but also LSPs and corporate clients. Like any marketplace, the idea is that the different parties can partner up here without any additional payment requirements. Christian mentioned in particular that a good number of the corporate clients are looking to work with individual translators directly, so this might not be the worst place to be represented if you feel comfortable working with Across.
As far as I could tell, there are presently about 600 translators enrolled with a visible profile (you can select to have a hidden profile), and 60-some of those have a Premium account -- but these numbers are likely to expand quickly since the portal was just released a couple of weeks ago.
I've often written about marketplaces of technology vendors in the Tool Box Journal and have typically not been particularly positive about their chances for success. Oddly enough, in this case it might just work (whatever "work" means). The closed loop outlook that Across has fostered over its existence might be one of the reasons for that. After all, why would I go anywhere else as a translation buyer or LSP looking for a translator but to the place where they're all registered? Sure, for many of us, Across' lack of openness has been -- and still is -- frustrating, but in the end it's a business decision and one that might work in Across' favor in this particular instance. Plus, you can't say that Across hasn't been open about their lack of openness, and many of their corporate clients like them for that very reason.
I'm also interested in seeing whether there will be an uptick in the adoption of Across as a standalone tool now that you have to pay for it. Sometimes it's exactly the existence of a price tag that makes people (read: us) more prone to use it.
We'll see.
ADVERTISEMENT
MindReader for Outlook
MindReader for Outlook makes your communication more efficient by suggesting text from previously sent e-mails. Compose your e-mails more quickly, more accurately, and more consistently!
Stop spending time deciding how to communicate your message and concentrate on the content of the message instead. This heightened efficiency provides you with extra time to manage important work, freeing you from the minutia of your daily routine.
MindReader for Outlook supports Microsoft Outlook 2010 and 2013 and is available as a single-user license or site license.
Get your free trial license at STAR Group webshop.
www.star-group.net
2. iLangL (Premium Edition)
The good news about "iLangL" -- it's just the company name! Its owner Sergey Yuryev fortunately decided not to use that unpronounceable monstrosity for his main product, which he called -- decidedly blandly -- "Generic Content Provider." Phew!
I asked Memsource's CEO about Sergey and his product (you'll see why in a second), and he was rather complimentary about his skills. Then he said, "I appreciate Sergey's enthusiasm -- it connects well with our passion for Memsource." I have to agree: Sergey does indeed seem to be enthusiastic and also seems to know what he is doing.
So what in fact does he do and offer?
As far as I can tell, Generic Content Provider offers a different and new way to extract data relatively seamlessly from content management systems, transform it into easily translatable files, and then, once it's translated, write it back into the CMS in a structure parallel to the source language.
You all probably know what a CMS is. If not, it's a system that stores and allows for access to content that can be published in a number of ways, especially (in our context) on websites. In all likelihood, it's no longer enough to be able to translate HTML pages when translating a website (unless it's as lame a website as my own), but you have to develop different strategies.
These can include proxy-based translation solutions like those offered by Easyling, Smartling, or dozens (at least so it seems) of larger LSPs. In these cases, the CMS is never actually touched, and all the translated content sits on a different server, often owned by the technology provider (that's why so many LSPs like this solution).
Or you can have solutions like Lionbridge-owned ClayTablet, Lingotek, or Beebox that essentially sit within the client-owned CMS and serve the translatable content in a translation-friendly format (typically XLIFF) or read it right into a translation management system.
Or you can have a solution like Sergey developed that is generic (therefore "Generic Content Provider") in the sense that it's not installed within the CMS but sits on a server (either your own server or in the cloud), and you gain access to the different CMSs in whatever way they provide for. The CMSs that are already supported (Adobe Experience Manager, DNN, EpiServer, SiteCore, WordPress, Umbraco, and Drupal) are accessed via either a REST API (Adobe EM), MS SQL database connections (DNN, EpiServer), SOAP (SiteCore), or a MySQL/WPML connection (WordPress). For each of these solutions there is a user interface that allows you to decide which content should be translated and in what manner. While the user interfaces are relatively easy to navigate, the setup is system-specific and not for the faint of heart. Unless you really know what you're doing (and I have to assume you must if you're still reading this), you might be well-advised to use Sergey's consulting services to help you with the setup. (The same consulting services are also available if you need a connector to a CMS that might not be listed above.)
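To picture the "generic" part, here's a minimal Python sketch of such a connector architecture. All class and method names here are my own invention, not iLangL's actual API: one abstract interface, with each CMS getting its own implementation behind it.

```python
from abc import ABC, abstractmethod

class CmsConnector(ABC):
    """One interface for all CMSs; each system gets its own implementation."""

    @abstractmethod
    def pull(self, page_id: str) -> dict:
        """Fetch translatable content from the CMS."""

    @abstractmethod
    def push(self, page_id: str, translated: dict) -> None:
        """Write translated content back in a parallel structure."""

class InMemoryConnector(CmsConnector):
    """Toy stand-in for a REST-, SOAP-, or SQL-backed connector."""

    def __init__(self, store: dict):
        self.store = store

    def pull(self, page_id):
        return self.store[page_id]

    def push(self, page_id, translated):
        self.store[page_id] = translated

# The middleware only ever talks to the CmsConnector interface, so
# supporting a new CMS means writing one new connector class.
cms = InMemoryConnector({"home": {"title": "Willkommen"}})
source = cms.pull("home")
cms.push("home", {"title": "Welcome"})
print(cms.pull("home")["title"])  # Welcome
```

The point of the abstraction is that everything downstream -- extraction, translation, write-back -- stays identical no matter which CMS sits behind the connector.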
What's interesting and kind of strange is the file format in which the data emerges from the CMS. Rather than extracting it in the translation file exchange format XLIFF, it comes out in a paragraph-segmented XML format. Sergey's reasoning for this is that he wants to leave the segmentation to the translation environment being used for the actual translation rather than already make those decisions via the prepared XLIFF.
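Here's a small Python sketch of what that division of labor might look like. The XML element names are invented for illustration (the actual schema may differ), and the sentence-splitting rule is deliberately crude: the export delivers whole paragraphs, and the translation environment applies its own segmentation rules downstream.

```python
import re
import xml.etree.ElementTree as ET

# A paragraph-segmented export: no segmentation decisions made yet.
exported = """<content>
  <paragraph id="p1">Hello world. This is a test.</paragraph>
  <paragraph id="p2">One more paragraph.</paragraph>
</content>"""

root = ET.fromstring(exported)

# The translation environment, not the export, decides where segments
# end -- here with a crude rule: split after sentence-final punctuation
# followed by whitespace.
segments_by_id = {}
for para in root.iter("paragraph"):
    segments_by_id[para.get("id")] = re.split(r"(?<=[.!?])\s+",
                                              para.text.strip())
print(segments_by_id["p1"])  # ['Hello world.', 'This is a test.']
```

A pre-segmented XLIFF would have baked the split into the file; a paragraph-level export leaves each tool free to apply its own (possibly better) rules.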
This is where David from Memsource comes into play. There is already a direct interface to Memsource, so you can process the XML files (once you configure Memsource's XML filter accordingly) right within Memsource. Both iLangL (geesh!) and Memsource seem to be genuinely excited about this partnership and have ongoing plans for a deeper integration.
So far the solution is really geared toward LSPs (iLangL's flagship customer is the Danish TextMinded). While this is an avenue that Sergey hopes to continue to build on, he is also planning a number of features for next year -- including more automation and notifications (presently the project manager has to push new content out manually) -- that will be interesting for corporate customers as well.
Pricing is very transparent and is done per channel (= CMS connection). One word to the wise: If the writing on the website could be improved a little, it all would look a tad more professional. (How's that for a job for someone among us: offer Sergey some copy-editing services in the new year!)
3. The Tech-Savvy Interpreter: First Look: The Interprefy Remote Interpretation Platform (Column by Barry Slaughter Olsen)
Last month I introduced you to WebRTC, the new technology "baked in" to the Google Chrome and Mozilla Firefox browsers that makes in-browser audio, video, text and file sharing possible without installing any plug-ins. In an effort to show you how WebRTC is enabling remote interpreting over the web, I thought it might be fun to take a look at a WebRTC-based remote interpreting platform. So, I reached out to Kim Ludvigsen, CEO of the Zurich-based startup Interprefy and asked for a look-see at their remote interpreting platform, which runs on WebRTC. He readily obliged.
Some Background
Like many startups that are looking to disrupt the interpreting space, Interprefy is a newcomer. Their idea was born out of a dissatisfaction with the way conference interpreting services were provided in meetings the company founders attended themselves. You can read more about their history here.
It is important to understand that "remote interpretation" is an umbrella term that covers a broad range of interpreting modalities and service delivery models. Is it consecutive or simultaneous? Are the participants in the same room or are they remote as well? Will the interaction be just a few minutes long or will it last all day? Can the interpreters see the participants? How many languages will there be? Etc. The list of questions needed to define a specific use case is long indeed. So, whenever looking at a remote interpreting platform, it is important to understand its use case, which is to say where and how it will be used.
That said, Interprefy has chosen to tackle one of the most difficult and controversial remote interpreting use cases -- remote simultaneous interpretation for meetings where the participants are all physically in the same room. Only the interpreters are connected remotely through technology. Think of it as meeting delegates seated around the same table with the interpreters seated across town or halfway around the world and connected to the meeting over the Internet.
For many conference interpreters this is a doomsday scenario. I disagree. While this technology does have the potential to displace some on-site conference interpreting work, it is not going to change 60+ years of professional practice overnight and probably not ever. To the contrary, it has the potential to create more new remote work than the traditional on-site conference interpreting work it displaces.
So How Does It Work?
Interprefy uses WebRTC technology to transmit both audio and video over the Internet: interpreters receive the source audio and video and interpret into the target language for attendees who are physically present in the same room as the speaker. It goes without saying that there has to be fast, dependable broadband Internet available both at the meeting venue and in the offices where the interpreters will be working.
What Do the Interpreters Need to Connect?
Interpreters connect to the Interprefy platform using either the Google Chrome or the Mozilla Firefox web browser on a PC or Mac. They need to have a quality headset and a webcam (see my previous post on choosing a USB headset here). They must have a wired broadband connection to the Internet (in technical terms, according to Interprefy, that means a minimum of 2 Mbps down, 2 Mbps up, and a ping of under 50 milliseconds). As is the case when working remotely on any platform, interpreters must be in a quiet space where they can work uninterrupted.
What's the Interpreter Interface Like?
There are many elements of the interpreter interface. The most prominent feature is the video of the speaker. It also includes a chat box that allows the interpreter to communicate with the organizer on site at the venue and another to chat with a virtual booth mate. The mute button is used to turn the microphone on and off. The interface also has the potential to provide a video feed of one's booth mate as well.
One feature I found very helpful was the inclusion of a two-way video and audio link between interpreter and speaker before the event begins -- a kind of "virtual green room" that makes pre-speech briefings for the interpreters a reality.
How Do Delegates Listen to the Interpretation?
Delegates can use any Android or iOS smartphone with earbuds or headphones. They have to download the Interprefy Connect app, which is simple to do. Each conference or meeting is given a unique code (Interprefy calls it a "token"). Participants enter the meeting token and they are then connected to the alternate language channel (i.e. the language the interpreters are working into). The smartphone becomes a simple audio receiver. I was able to demo this myself on my iPhone 6. Connecting to the audio stream was fast and simple -- type a six-digit code and you are in. The audio was clear and there was very little latency.
The folks at Interprefy have ambitiously designed their platform for a wide range of use cases -- large presentations, seminars, workshops, and smaller meetings. In its current form, based on what I saw, it seems best suited for large presentations where the communication is one-to-many. Meetings with lots of dialog going back and forth, like negotiations or discussions, entail additional complexity, and I haven't seen Interprefy in that kind of use case, so I'm not yet in a position to make an assessment.
Undoubtedly, the ability to use a smartphone or other smart device as a receiver for interpretation is compelling. For meeting organizers, this means cost savings on equipment while still providing interpreting services for attendees. However, it does shift the burden of setup to the participant, and as many technicians have pointed out to me, smartphone battery life limits how long a person can -- and may be willing to -- use a smartphone for listening to interpretation.
I raised this concern with Interprefy. They explained that for longer meetings with interpretation they supply 2500 mAh power packs that double the battery life of an average smartphone, enabling full-day meetings with interpretation. The power packs work with both Android and Apple devices. They also have multi-device charging stations available for longer meetings.
Overall Assessment and Reflections
End User Experience: The Interprefy model seems simple and easy for meeting participants to use. If you have earbuds (Interprefy can also supply these at the meeting venue for participants who didn't bring their own) and a smartphone or tablet, you can listen to the interpretation. That simplicity is powerful but it also raises the question: How willing will a meeting participant be to use his/her own smartphone to listen to the interpretation?
I expect this kind of interpretation setup to be used initially for meetings that last a few hours, not an entire day or longer. This technology will also make it possible to provide interpretation to a larger group of people than may have been feasible in the past. Think of it this way. If someone is giving a speech in a large auditorium or a stadium with tens of thousands of attendees, hundreds or thousands could listen to the interpretation with an app by simply using their smartphones and earbuds. No logistical headache of having to distribute and then collect and clean headphones and receivers.
It is important to note that simultaneous interpretation technicians will still need to be on site to ensure the technology is working and to resolve any problems that may emerge during a remotely interpreted meeting.
Interpreter Experience: The interpreter experience on Interprefy is headed in the right direction but still needs refinement. For example, currently the meeting room video and the interpreter controls have to be opened in separate windows that can end up buried under other programs open on the desktop. Additionally, it still takes several clicks and the introduction of passwords, tokens and codes to get the system configured to interpret. Volume control and microphone selection can also require several steps. These are failings that I have noted in multiple remote interpreting platforms. While some tech-savvy interpreters are able to get through these processes easily, many others struggle.
The good news is the engineers at Interprefy are working to integrate all the necessary features in a single interface that will allow the remote simultaneous interpreter to connect quickly and painlessly. I've seen a screenshot of the new interface and I think they are headed in the right direction. Even so, as is the case with any interpreter console, physical or virtual, interpreters need time to get used to its features so they can use them under the stress of simultaneous interpretation. Adapting to online work takes time and practice.
The long and short of it is that the Interprefy platform is up, running and commercially available. They have been providing remote interpretation for meetings since early 2015. The platform makes remote simultaneous interpretation of face-to-face meetings and conferences a reality without having to spend significant amounts of money on equipment, transportation and lodging for interpreters. This means that there is a potential for significant change in the conference interpretation market. Interprefy is still a little rough around the edges, but the great thing about technology hosted in the cloud is that improvements can be made on an ongoing basis, which means an already functional platform is only going to get better.
Do you have a question about a specific technology? Or would you like to learn more about a specific interpreting platform, interpreter console or supporting technology? Send us an email at inquiry@interpretamerica.com.
4. Rapid Developments
In the last newsletter I wrote about the new translation environment tool Lilt and described some of its rather groundbreaking features, including the lack of distinction between TM and MT, the termbase and concordance search using the same material as the MT, the MT suggestions automatically and interactively refreshing themselves after every word you enter, every finalized segment automatically considered for new MT (and "TM") suggestions, and no inline tags because even they are taken care of by MT processing. (Time to breathe.)
In the few days between the last newsletter and this, Lilt's development team has continued to work feverishly to add to the existing feature set.
For their concordance ranking they are now using something called "paraphrastic sentence embeddings." This relatively new technique comes from distributional semantics and allows the tool to represent semantic relationships between words as vector differences to figure out context.
Don't get it? Me neither. The point is, though, that this allows the tool to evaluate the context of a word or phrase to predict its likelihood of being a good match in a concordance search. Unlike other tools, Lilt uses no metadata in its MT/TM/concordance/termbase data monster (that's a technical term!), so it has to use other ways to determine the likelihood of concordance matches.
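For the curious, here's a toy Python sketch of the general idea -- the word vectors are invented and this is emphatically not Lilt's actual implementation. Each sentence is embedded by averaging its word vectors, and candidates are ranked by cosine similarity to the query, so a context word like "loan" pulls an ambiguous word like "bank" toward its financial sense.

```python
import math

# Invented 2-D word vectors; a real system learns high-dimensional ones
# from large corpora. "bank" sits between the river and money senses.
vectors = {
    "bank":  [0.5, 0.5],
    "river": [0.9, 0.1],
    "money": [0.1, 0.9],
    "loan":  [0.1, 0.8],
}

def embed(sentence):
    """Average the word vectors -- a crude stand-in for paraphrastic
    sentence embeddings."""
    words = [vectors[w] for w in sentence.lower().split() if w in vectors]
    return [sum(dim) / len(words) for dim in zip(*words)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "loan" shifts the query toward the financial sense of "bank".
query = "bank loan"
candidates = ["river bank", "money bank"]
ranked = sorted(candidates,
                key=lambda s: cosine(embed(query), embed(s)),
                reverse=True)
print(ranked[0])  # money bank
```

Since there is no metadata to lean on, a similarity score like this is what stands in for the usual match percentages when ranking concordance hits.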
If you really have nothing better to do over the holidays, you can read this paper that deals with this. (But, boy, your holidays must be a real drag if you think that's fun to do when you could also go out for a walk, go ice-skating, play with your or others' kids, kiss your darling, eat some more, sleep some more . . . you get the point.)
Also, they implemented a new Unicode standard for text segmentation, which unfortunately cannot yet be represented in SRX, the segmentation rules exchange standard.
And lastly (aside from a number of here-and-there kinds of changes, like better support for German compounds, a new website, and auto-propagation), it's now possible to do a concordance search for a whole phrase rather than just a single word by simply highlighting the phrase.
The immediate roadmap for new languages, by the way, looks like this: We already have EN<>ES, EN>FR, and EN>DE, and we'll have FR>EN and EN<>PT in January.
Happy New Year!
ADVERTISEMENT
Now even easier: Across Language Server v6.3
Across v6.3 enables users to integrate third-party systems into their Across workflows in a controlled manner. The translation environment also boasts a number of new features, including filter and sorting functions as well as support for PDF and JSON files. The release of the new Across version was accompanied by the go-live of crossMarket, the new network for all Across users, which brings together translation service providers and customers. For freelance translators, the crossMarket membership also includes the Basic or Premium variant of the Across Translator Edition, depending on their membership type.
Would you like to learn more about version 6.3? Check out the new features.
5. Virtually Speaking (Premium Edition)
Though already introduced in Windows 7, the concept of "libraries" is still not well understood by many.
Libraries are like virtual folders. Of course, this doesn't mean that normal computer folders are physical, but in the traditional computer world, they and only they "contain" the files that are stored in them. Libraries, on the other hand, are virtualizations of that. They can display the contents of other folders from all over your computer, other computers on the network, or even an external hard drive or a flash drive. A library is essentially an organizational principle that monitors other folders and provides a single "location" to work with all their contents.
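If it helps, here's a tiny Python analogy of what a library does -- purely conceptual, not how Windows actually implements it: the Library object only holds references to folders elsewhere and presents their contents as one view.

```python
import tempfile
from pathlib import Path

class Library:
    """A 'virtual folder': it merely references folders elsewhere."""

    def __init__(self, *folders):
        self.folders = [Path(f) for f in folders]

    def files(self):
        # Aggregate the contents of all watched folders into one sorted
        # view; the files themselves never move.
        return sorted(p.name for folder in self.folders
                      for p in folder.iterdir() if p.is_file())

# Two folders in completely different locations...
a = tempfile.mkdtemp()
b = tempfile.mkdtemp()
Path(a, "report.docx").touch()
Path(b, "photo.jpg").touch()

# ...appear as a single collection in the library.
docs = Library(a, b)
print(docs.files())  # ['photo.jpg', 'report.docx']
```

Delete the Library object and both files are still exactly where they were -- which is the same reason deleting a Windows library doesn't delete its contents.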
Out of the box, Windows comes with four libraries: Documents, Music, Pictures, and Videos, each with its obvious content. (Windows 10 has added some odd and unimportant ones such as "Camera Roll" and "Saved Pictures" that are connected to the cloud-based Windows OneDrive.) And again, while the references to those files are stored in the respective libraries, the actual files stay wherever you stored them on your computer.
There are plenty of ways you can use libraries to manage, manipulate, or organize files, but one is particularly helpful: backup.
Though Windows 7 offered a good and painless way of backing up your whole computer, Windows 8/8.1 introduced a whole new backup system called "File History." File History allows you to back up your most important files and later restore them by selecting them and choosing History on the File Explorer ribbon, or by right-clicking any file and selecting Properties > Previous Versions.
Windows 10 now has tried to make everyone happy and offers both kinds of backup: the complete hard drive or just the most important files. For my own needs I have stayed with the latter, the File History process.
The reason it's important to mention File History in the context of libraries is that it's exactly -- and only -- the content in the libraries that is backed up during the regularly scheduled backup processes (which you can control and enable under File History in the Control Panel).
So if you're using the File History process and need to back up files beyond what's automatically located in your pre-configured libraries (Documents, Music, Pictures, and Videos), you will have to either add folders to the existing libraries or create a new library.
For instance, I found it very helpful to add all of the data under C:\Users\<user>\AppData into a newly created library (which I called BackUp) so this is backed up as well. The AppData folder contains a great number of settings files for many programs as well as actual working files for some.
To create a new library, simply right-click on the Libraries folder on the left-hand side of File Explorer (the Navigation pane) and select New > Library. Once you give your library a name and open it, you'll be prompted to add folders from any location you can access from your computer (except read-only media such as DVDs or CDs). Since you don't want your complete system to be backed up every night, you can pick and choose the necessary folders and then schedule the library for your nightly backup.
Stay safe!
6. New Password for the Tool Box Archive
As a subscriber to the Premium version of this journal you have access to an archive of Premium journals going back to 2007.
You can access the archive right here. This month the user name is toolbox and the password is frostylawns.
New user names and passwords will be announced in future journals.
The Last Word on the Tool Box Journal
If you would like to promote this journal by placing a link on your website, I will in turn mention your website in a future edition of the Tool Box Journal. Just paste the code you find here into the HTML of your webpage, and a little icon linking to my website will be displayed on that page.
Here is a website that mentioned the Tool Box last month:
www.tradiling.net
If you are subscribed to this journal with more than one email address, it would be great if you could unsubscribe redundant addresses through the links Constant Contact offers below.
Should you be interested in reprinting one of the articles in this journal for promotional purposes, please contact me for information about pricing.
© 2015 International Writers' Group