Machine translation is uneditable because it often lacks the context and emotive aspects of language, producing non-human-sounding English. As a result, your intended meaning will be lost.
Generally speaking, the performance of machine translation improves with increased input data. The best input data comes from aligned corpora, in which linguistics researchers have aligned thousands of sentence pairs in at least two languages, mapping word to word, verb ending to verb ending, and so on. Large-scale commercial machine translation engines like Google Translate have far more input data, but they align it automatically using algorithms, so the alignment is less precise. Most importantly, machine translation engines apply no quality control to the finished product: even if you input one sentence, you rarely get a well-formed sentence as the output. How can you expect a complete paragraph to make sense if the program can’t even get one sentence right?
There are also broad phrasing differences between academic prose in different languages: Chinese sentences contain many short phrases concatenated with commas, while English sentences tend towards fewer, longer phrases and accordingly fewer commas. Another classic example: Japanese sentences often lack a subject, but English sentences grammatically require one. Translation machines don’t ‘see’ a subject in the input and don’t try to create one in the output, whereas human translators can reconceptualize the whole sentence and supply a suitable subject, using a word that might not even appear in the Japanese sentence.
Nowadays, translation engines can be useful for single terms and phrases, but that’s because they’re essentially playing a matching game with words. At the sentence and paragraph level, meaning is formed by the interplay of words and concepts with context—i.e., what the reader already knows, and what the words and concepts imply without stating on the page—and current machine translation technology simply lacks this highly advanced human cognitive ability.
When talking about machine-translated English, we need to consider that where a word has multiple meanings, translation software often chooses an incorrect one. This can occur more than once in a single sentence, hugely compounding the problem. Translation software also does not know discipline-specific usage and complex technical language (if you doubt this, try having some technical English you understand machine-translated into your language, and see whether you can make sense of it). It is therefore likely to be very difficult for an Editor to decipher the intended meaning wherever the wrong sense of a word has been used. Translators used by Uni-edit are subject-matter experts as well as skilled translators, so they can produce a translation that an Editor can understand.
Machine-translated English often distorts, and thus misses, the author’s intended meaning, so Editors cannot do their job in a professional way.
In my experience, machine-translated text in most cases either does not make sense or makes your intended meaning unclear. Editors will either fail to understand the machine-translated text or misunderstand your intended meaning. Therefore, I don’t recommend that authors submit machine-translated text.
At Uni-edit, we understand that Chinese-to-English translation is more expensive than English editing, so authors might try to submit machine-translated English for editing to save money. I have personally used Google Translate to translate English into Chinese. Honestly, I couldn’t read the translated Chinese and couldn’t even work out the rough meaning. It is not respectful to expect an Editor to devote part of their effort to deciphering unclear English. When an author’s English is not good enough to make a paper ready for journal submission, it is best to engage a professional company for translation as well as English editing.