Leaderboard

Popular Content

Showing content with the highest reputation on 04/26/23 in all areas

  1. In general, AI/ML systems require a lot of processing power and cannot easily be parameterised -- do you want to keep honorifics or transform them, set the gender of names, change the definitions of words, etc.? A whole new model has to be retrained every time. The advantage is that the approach is simple to implement and not dependent on language: the implementer only needs a set of matching phrases to train on and enough processing power, and does not even need to know either language! Output is based on the training data, so it can be very close to natural. But it is not easy to modify the model after training to suit a specific application, hence entirely specialised models are needed, e.g. DeepL, Google (general text), Sugoi (JP VN translation only), etc. The "hallucination" phenomenon can also make outputs look very correct when they are in fact wrong, wherever training data is lacking. Unfortunately, syntax-parsing MT has dropped in popularity because of the AI hype, even though it is very easy to modify and parameterise for any application. It does require someone who knows both languages to implement and adjust the parsing/transformation rules, but once set up correctly it can give highly accurate 1:1 correspondence translation at high speed with very low processing power (a toy illustration of such rules is sketched after this list). In addition, when the algorithm fails to parse or to find an appropriate rule to apply, the output becomes wrong or is left untranslated in a very obvious way.
    1 point
  2. Yet another company escapes the evil grasp of social media giants.
    1 point
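
As a rough illustration of the rule-based, parameterisable behaviour described in item 1, here is a minimal Python sketch (not from the original post). The rule table, the honorific glosses, and the translate()/keep_honorifics names are all invented for illustration; a real syntax-parsing MT system would use proper grammatical parsing rather than simple phrase substitution.

# Minimal sketch of a rule-based translator, illustrating the points in item 1:
# explicit, editable rules; a simple switch such as keep_honorifics that needs
# no retraining; and an obvious failure mode (uncovered words stay untranslated).
# The rules and vocabulary below are toy examples, not a real MT system.

RULES = {
    # source phrase -> target phrase (longest match applied first)
    "ohayou gozaimasu": "good morning",
    "sensei": "teacher",
}

HONORIFICS = {"-sama": " (esteemed)", "-san": " (Mr./Ms.)"}

def translate(text: str, keep_honorifics: bool = True) -> str:
    out = text.lower()  # simplification: case is ignored in this toy example
    # Apply phrase rules, longest first, so multi-word rules win over single words.
    for src in sorted(RULES, key=len, reverse=True):
        out = out.replace(src, RULES[src])
    # Honorific handling is a plain parameter -- change it without retraining anything.
    for suffix in sorted(HONORIFICS, key=len, reverse=True):
        out = out.replace(suffix, HONORIFICS[suffix] if keep_honorifics else "")
    # Anything not covered by a rule stays as-is, so failures are obvious.
    return out

if __name__ == "__main__":
    print(translate("Ohayou gozaimasu, Tanaka-san"))                          # good morning, tanaka (Mr./Ms.)
    print(translate("Ohayou gozaimasu, Tanaka-san", keep_honorifics=False))   # good morning, tanaka
    print(translate("Konbanwa, Tanaka-san"))                                  # "konbanwa" left untranslated

The point of the sketch is only that behaviour is controlled by an explicit parameter and hand-editable rules, and that anything the rules do not cover is left visibly untranslated rather than replaced with something plausible but wrong.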