REtransInternational

Members
  • Posts

    483
  • Joined

  • Last visited

  • Days Won

    12

REtransInternational last won the day on October 1 2023

REtransInternational had the most liked content!

Profile Information

  • Gender
    Not Telling

REtransInternational's Achievements

Fuwa Elite

Fuwa Elite (6/11)

41

Reputation

  1. Japanese text is always rendered fixed-width; Japanese developers will always do that when they use their own text rendering code, because they don't really care about any other language.
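The fixed-width assumption can be checked programmatically: Unicode classifies most Japanese characters as full-width, which is why engines that count columns instead of measuring glyphs work fine for CJK text but break for proportional Latin scripts. A minimal sketch in standard Python (not from the post):

```python
import unicodedata

def display_width(s: str) -> int:
    """Column width of a string under the fixed-width (terminal-style) model:
    Wide/Fullwidth East Asian characters occupy 2 cells, everything else 1."""
    wide = {"W", "F"}  # East Asian Width classes: Wide, Fullwidth
    return sum(2 if unicodedata.east_asian_width(ch) in wide else 1 for ch in s)

print(display_width("こんにちは"))  # 5 kana, 2 cells each -> 10
print(display_width("hello"))       # 5 ASCII letters, 1 cell each -> 5
```

An engine written this way wraps lines by counting cells, which is why mixed-width Latin text comes out unevenly spaced.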
  2. Ten years ago - Tuesday September 17, 2013 - our Founding Mother, Aaeru, made her last post on Twitter, and was never heard from again. No one seems to really know what happened to her, and in fact whether she is even still alive, but let's not forget the one who started it all. Aaeru, if you're still lurking out there and come across this post: thank you for everything.
  3. The OP says 2017-2018 so neither of those are old enough. This query only has 31 results and might have it: https://vndb.org/v?q=&ch=&f=032gru74_0bZW280Rq&s=22w
  4. Thank you for the samples; they confirm the decoding is successful. Yes, this format has a separate alpha channel. Please find the analysis report and sample code here: https://www.mediafire.com/?4sjk7crcsbaklc8
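For formats that store colour and alpha as separate planes, recombining them is just interleaving the two buffers. A hypothetical sketch in pure Python (the actual plane layout and channel order are in the linked report, not assumed here to be anything beyond packed RGB plus one alpha byte per pixel):

```python
def merge_rgba(rgb: bytes, alpha: bytes) -> bytes:
    """Interleave a packed RGB plane and a separate alpha plane into RGBA.

    Assumes rgb holds 3 bytes per pixel and alpha 1 byte per pixel;
    real formats may use a different order or row padding.
    """
    assert len(rgb) == 3 * len(alpha), "plane sizes disagree"
    out = bytearray()
    for i, a in enumerate(alpha):
        out += rgb[3 * i : 3 * i + 3]  # copy R, G, B for this pixel
        out.append(a)                   # append the matching alpha byte
    return bytes(out)

# Two pixels: opaque red, half-transparent green
pixels = merge_rgba(b"\xff\x00\x00\x00\xff\x00", b"\xff\x80")
```

The resulting RGBA buffer can then be handed to any image library for saving as PNG.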
  5. Thank you for the additional information and sample. Does this look correct for an extraction of your sample image? If it is correct, please provide a few more samples, preferably of different dimensions, for validation.
  6. What does the distortion look like? A picture is worth at least 1k words.
  7. Line spacing may be hardcoded in the .exe; it needs to be analysed to find where to edit.
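Once the constant is located, patching it is a simple byte overwrite; the hard part is finding it. A hypothetical sketch (the spacing value 24 and the little-endian 32-bit encoding are illustrative assumptions, not facts about any particular engine; every hit must be confirmed in a disassembler before patching):

```python
import struct

def find_candidates(exe: bytes, value: int) -> list:
    """Offsets where `value` appears as a little-endian 32-bit integer.
    Each hit is only a candidate for the hardcoded line-spacing constant."""
    needle = struct.pack("<i", value)
    hits, pos = [], exe.find(needle)
    while pos != -1:
        hits.append(pos)
        pos = exe.find(needle, pos + 1)
    return hits

def patch(exe: bytes, offset: int, new_value: int) -> bytes:
    """Overwrite the 4-byte constant at `offset` with `new_value`."""
    return exe[:offset] + struct.pack("<i", new_value) + exe[offset + 4:]
```

In practice the value may instead be an immediate inside an instruction or a field in a font struct, so expect to narrow down many false positives.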
  8. Try it? If it is a simple concatenation you may be able to cut the XP3 archive off the end separately.
  9. Post a hexdump of the first 40 bytes; that can show what the actual format is. Example of a valid XP3:

     00000000: 58 50 33 0D-0A 20 0A 1A-8B 67 01 17-00 00 00 00
     00000010: 00 00 00 01-00 00 00 80-00 00 00 00-00 00 00 00
     00000020: 0C A4 6A 4D-00 00 00 00
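If the archive really is just concatenated onto the executable, it can be located by scanning for the 11-byte XP3 signature visible at the start of that hexdump. A sketch, assuming a single appended archive that runs to the end of the file:

```python
# XP3 signature: the first 11 bytes of the valid-file hexdump above
XP3_MAGIC = b"XP3\r\n \n\x1a\x8bg\x01"

def split_appended_xp3(blob: bytes):
    """Return (stub, archive) if an XP3 signature is found past offset 0,
    assuming the archive extends to end of file; None if no signature."""
    pos = blob.find(XP3_MAGIC, 1)  # start at 1 so a bare .xp3 is not "split"
    if pos == -1:
        return None
    return blob[:pos], blob[pos:]
```

Note this is a heuristic: the magic bytes could in principle also occur inside unrelated data, so check that the cut-off tail opens correctly in an XP3 tool.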
  10. In general, AI/ML systems require a lot of processing power and cannot easily be parameterised -- do you want to keep honourifics or transform them, set the gender of names, change the definitions of words, etc.? A whole new model must be retrained every time. The advantage is that this approach is simple to implement and not dependent on language; it only needs a set of matching phrases to train on and enough processing power - the implementer does not even need to know either language! Output is based on the training data, so it can be very close, but it is not easy to modify the model after training to suit a specific application. Hence entire specialised models are needed, e.g. DeepL, Google (general text), Sugoi (JP VNTL only), etc. Also, the "hallucination" phenomenon can produce outputs that look very correct while being incorrect wherever training data is absent.

      Unfortunately, syntax-parsing MT has dropped in popularity due to AI hype, although it is very easy to modify and parameterise to adapt to any application. It does require one to know both languages in order to implement and adjust the parsing/transformation rules, but once set up right it can give highly accurate 1:1 correspondence translation at high speed with very low processing power. In addition, when the algorithm fails to parse or to find an appropriate rule to apply, the output becomes wrong or is left untranslated in a very obvious way.
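The parameterisation point can be illustrated with a toy rule table (the phrases and the honourifics switch are invented for illustration, not taken from any real system): flipping one option changes the output instantly, which a trained neural model cannot offer without retraining.

```python
# Toy rule-based "MT": a phrase table plus a user-controllable switch.
# Real syntax-parsing MT applies grammar rules, not bare substitution;
# this only demonstrates how directly such a system can be parameterised.
RULES = {"田中さん": ("Tanaka-san", "Mr. Tanaka")}

def translate(text: str, keep_honorifics: bool = True) -> str:
    for src, (honorific, plain) in RULES.items():
        text = text.replace(src, honorific if keep_honorifics else plain)
    return text

print(translate("田中さんはどこ?"))                         # Tanaka-sanはどこ?
print(translate("田中さんはどこ?", keep_honorifics=False))  # Mr. Tanakaはどこ?
```

Note that the untranslated remainder stays visibly Japanese - exactly the "fails in an obvious way" property described above, as opposed to a model hallucinating something fluent but wrong.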
  11. You may try asking on https://www.watzatsong.com/ - although they are focused on English, someone may know.