General Discussion: There's a Clannad of AIR-headed Kanon fodder being shot by the Little Busters After Tomoyo on a Planet-arian.
#1
I have an idea that I've been thinking about for a while. Be warned, it's a long rant, but I hope that some of you here, especially the ones who've worked on translations, can give reasonable replies. :D
There are hundreds of games from Japan that never make it to the U.S. Some deserve it because they suck, but there are always those rare gems, like certain anime-based games and PC dating sims/visual novels, that are great. Even if you did get hold of these games and had a modded system, you couldn't really enjoy them without understanding Japanese. So, after watching too much anime with closed captioning on TV, I came up with this idea to make these games more accessible to everyone else. The best way I can explain it is with this picture:

Video from any console is fed into a PC, where software processes it and loads up a translated script of the game's text. Like closed captioning, it superimposes the translated text on the video at the right moment and outputs it to a TV or PC monitor. With the anime fansub community already in place, people could work on just making the translated scripts, without any need to learn how to hack into the game the way the existing translated SNES ROMs and PC dating sims/visual novels were done.

There is also one example I've seen of something similar already available. The Quickcam Orbit, a webcam, has a neat feature that recognizes your face and superimposes a cartoon face instead, with the lips moving when you speak as well. You can see it in action here: http://www.youtube.com/watch?v=XXKupR1ibUk

So, with the right equipment and skills, can this somehow work?
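To make it a little more concrete, here's a rough sketch of the kind of loop I'm picturing on the PC side. It's just illustrative Python assuming something like OpenCV for the capture and display; the text-box coordinates are made up, and the actual recognition is left as a stub, since that's the hard part:

```python
# Rough sketch of the capture -> recognise -> overlay loop, assuming OpenCV
# and a capture card that shows up as an ordinary video device.

import cv2

TEXT_BOX = (50, 330, 540, 120)          # x, y, w, h of the game's text window (made up)

def recognise_line(text_area):
    """Stub: match the captured text window against a table of known lines
    and return the translated string, or None if nothing matches."""
    return None

cap = cv2.VideoCapture(0)               # the capture device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = TEXT_BOX
    translation = recognise_line(frame[y:y + h, x:x + w])
    if translation:
        # "Closed caption" the frame: draw the translated line below the box.
        cv2.putText(frame, translation, (x, y + h + 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    cv2.imshow("subtitled output", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```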
#2
It's an interesting idea, but:
1. I'm not sure this 'visual-recognition-and-closed-caption-imposing' software exists at the moment.
2. If someone were to write one, it'd have to work at an acceptable speed (bearing in mind that in something text-heavy, you'd need to be doing a lot of recognising-and-captioning). You'd also have to teach the program to reliably respond to every conceivable situation where a line is presented.
3. It wouldn't work too well for something like a visual novel (in the old sense of the word), where the text is presented over the pictures themselves.
#3
The computer programming skills required to achieve such a feat are significantly greater than what it takes to just hack the game normally.
#4
Quote:
__________________
www.neechin.net @aginyan Narcissu 2 Eng #denpa@synirc.org
Shares of bridge for sale: $590 a share. Funded by: "did you really say that just now?"
#5
I would modify the method by using overlays instead, perhaps. It would take a hell of a lot longer, having to make every text window in Photoshop or whatever, but it would be the only feasible way of "fansubbing" videos of visual novels. =/ It'd be much easier to just rip the script and re-insert it into the game.
-Rai |
#6
Quote:
#8
Quote:
#9
I don't see why it shouldn't be taken seriously. There may be easier approaches in many cases, but that doesn't stop this from being an interesting problem; there are non-eroge-related use cases for all the stages (subtitle recognition, subtitle removal), after all.
I can easily see a perfectly valid research project along these lines, though it would, as Talbain notes, probably have to be coupled with machine translation. :/
#10
Local fansub retranslators use a program that does faster-than-realtime OCR of subtitles in a video stream; quite a bit faster, actually. (Extensive knowledge of Japanese is quite rare here, so people make do with second-degree translations: translating English fansubs into Russian and, with their limited Japanese, tweaking them to better fit what they can hear and recognise.) I think the program is available in English, but I'd be hard pressed to remember where I got it; poke me about it later if you're interested. Unfortunately it works only on video files, but I suppose it's very much possible to do the same thing in real time on a video stream from an external source.
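For what it's worth, doing that kind of OCR on a frame grabbed from a live source would look roughly like this. Only a sketch: it assumes OpenCV for the capture side and Tesseract (with its Japanese language data) via pytesseract for the OCR, and the region numbers are invented:

```python
# Sketch: OCR the subtitle/text band of a single captured frame.
# Assumes OpenCV (cv2), pytesseract and Tesseract's "jpn" traineddata.

import cv2
import pytesseract

def read_text_band(frame, region=(0, 380, 640, 100)):
    """OCR one frame's text band; region is (x, y, w, h) and would have to be
    measured for the particular game/source."""
    x, y, w, h = region
    band = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    return pytesseract.image_to_string(band, lang="jpn").strip()

# e.g. inside a capture loop:
#   ok, frame = cap.read()
#   text = read_text_band(frame)
#   ...hand `text` to the matching / subtitle step...
```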
__________________
--- So you're like, nine hours fast?
--- Yes, I live in the future.
--- I doubt Russia is considered 'future'.
--- Maybe not the rest of Russia, but I certainly do.
#12
Of course, for the purpose of recognising text to be replaced, OCR per se is not necessary; all you need is a system that can correctly distinguish between different screens of text, and that can reliably map any given screen of text onto the correct substitute.
But for all I know that's just as hard as OCR. ^^;
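Just to illustrate, such a "screen matcher" could be as dumb as shrinking the text window down to a tiny binarised thumbnail and using it as a fingerprint, then picking the nearest known fingerprint. Purely a sketch, assuming OpenCV/NumPy; the table of fingerprints and their substitute lines would have to be built per game beforehand:

```python
# Distinguish screens of text without real OCR: fingerprint the text window
# and look up the nearest known fingerprint.  Assumes OpenCV and NumPy.

import cv2
import numpy as np

def fingerprint(text_area):
    """Reduce the captured text window to a small binarised thumbnail."""
    grey = cv2.cvtColor(text_area, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, (32, 8))
    return (small > small.mean()).flatten()      # 256-bit fingerprint

def best_substitute(text_area, known):
    """known: list of (fingerprint, translated_line) pairs built in advance.
    Returns the translated line of the closest fingerprint, or None."""
    fp = fingerprint(text_area)
    dist, line = min((int(np.count_nonzero(fp != k)), t) for k, t in known)
    return line if dist < 20 else None           # tolerate some capture noise
```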
#13
It's really the same principle... rather than comparing a captured bitmap of a character to models for all known characters, you're comparing the bitmap of the entire text area to models of all known text areas. In a sense, the text area just becomes one big "character."
If you *did* do OCR individually on each character, it wouldn't need to be 100% accurate, since you're not extracting the text in order to translate it; a mostly correct OCR of the passage is typically enough to determine with good certainty which screen is being displayed, given that a game has a reasonably finite number of text screens. Once such a system recognized the current location in the game, it wouldn't even be necessary to keep trying to recognize each screen of text; it would be far easier to sense when the screen changed and simply display the next line from the script. It would probably still have to be customized so heavily for each game and each engine's unique behavior that it wouldn't end up any easier than a script hack, though.
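To illustrate the "mostly correct OCR is enough" part: fuzzy-matching whatever the OCR spits out against the game's original script is usually enough to pin down the current line. A sketch using Python's standard difflib; the script lists and the 0.6 threshold are placeholders:

```python
# Locate the current line in the script from noisy OCR output.
import difflib

def locate(ocr_text, jp_script):
    """Return the index of the script line the OCR text most resembles,
    or None if nothing scores well enough."""
    scores = [difflib.SequenceMatcher(None, ocr_text, line).ratio()
              for line in jp_script]
    best = max(range(len(jp_script)), key=scores.__getitem__)
    return best if scores[best] > 0.6 else None

# Once the position is known, the subtitle is just en_script[best]; after
# that it is enough to notice when the text box changes and step to
# en_script[best + 1] instead of re-recognising every screen.
```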
#14
Quote:
...in fact, you'd only need to know when a certain region was clear: the moment when the game brings up the text box but hasn't actually drawn any text yet. That would mean it's time to display the next subtitle and keep it up until the window clears again. That's actually fairly easy, I suspect. It would come down to sampling a certain region of the screen (right after the first kanji in the textbox) and determining the dominant color. If the dominant color is the font color, do nothing. If it isn't, and is close to the box background color, clear the old subtitle and blit the next one.
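A sketch of what that sampling could look like, assuming NumPy and OpenCV-style BGR frames; the patch position and the two reference colours are per-game values that would have to be measured by hand:

```python
# Decide whether the text box currently shows text or is (still) empty, by
# looking at the mean colour of a small patch just past the first glyph.

import numpy as np

PATCH = (60, 340, 16, 16)                   # x, y, w, h of the sampled patch
FONT_COLOUR = np.array([255, 255, 255])     # e.g. white text
BOX_COLOUR = np.array([96, 32, 32])         # e.g. dark blue box (BGR)

def box_state(frame):
    """'text' if the patch looks like the font colour, 'empty' if it looks
    like the box background, 'unknown' otherwise."""
    x, y, w, h = PATCH
    mean = frame[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)
    if np.linalg.norm(mean - FONT_COLOUR) < 60:
        return "text"
    if np.linalg.norm(mean - BOX_COLOUR) < 60:
        return "empty"
    return "unknown"

# Per the logic above: while the state is "text", leave the current subtitle
# up; on the first frame where it turns "empty", clear it and blit the next
# line from the script.
```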
__________________
--- So you're like, nine hours fast?
--- Yes, I live in the future.
--- I doubt Russia is considered 'future'.
--- Maybe not the rest of Russia, but I certainly do.
| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Getting started with "Aria The Origination ~Aoi Hoshi no Il Cielo~" | DarkMorford | Production & Help | 0 | 2014-07-07 15:45 |
| Idea Factory posts "otome games in English" poll | dorkatlarge | General Discussion | 14 | 2012-09-10 16:23 |
| New Project: "A plugin that leverages ITH to subtitle any VN" | Aaeru | General Discussion | 2 | 2012-04-20 21:15 |
| Clannad "cutscene" translations | Kjoery | Technical Issues | 3 | 2008-07-25 13:03 |
| Way to access "Sore Ja, Mata Ne" in windowed form? | Ganryuu | Technical Issues | 2 | 2006-05-17 06:29 |