Fantasy CPU: Driving the Display (poll vs wait vs interrupt) #1685
Technically it's possible that in the "fantasy" there is no BUFFER... you could build the 24-bit output (to a "fantasy" dumb LCD) on the fly based on VRAM and BITMASK alone... so perhaps it's safer to assume that's what the internals of the TIC-80 fantasy hardware look like... rather than assuming there is 100kb of hidden RAM to buffer the screen. :-) Though our fantasy GPU is going to have to have pretty fast bit-shifting if we pretend the bitmask is truly a bitmask. (instead of the very RAM heavy nibble mask we actually use in C) :-)
I'll start: in "real life" (for simple retro graphics hardware) it seems more likely we'd have an interrupt-driven design of some sort... or should I stop talking about LCDs at all and go back to the days of CRTs and real scanlines? If we're going to say there are discrete CPU and GPU, we have to start thinking about it that way, and that might mean there are some things the CPU truly can't do - such as changing colors every border/scanline. That's probably tied up in very tight timing on the GPU side... asking and waiting on the CPU to answer might be far too slow.
I assume also that none of this would need to be used (even if provided) - given the assumption that VRAM is mapped directly to output. So much like someone writing a regular cartridge only has to implement TIC, perhaps wrt an actual CPU the minimum requirement is no callbacks/interrupts at all... since we could just assume "constant execution" rather than a scripting language that requires callbacks... i.e., the following should produce output:
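A minimal C sketch of that (assuming the fantasy memory map still puts 4-bit VRAM at address 0, and that a plain C program is how we'd talk to the fantasy CPU):

```c
#include <stdint.h>

// assumption: 4-bit VRAM mapped at address 0, as in TIC-80's RAM layout
#define VRAM ((volatile uint8_t *)0x0000)

int main(void) {
    for (;;) {
        VRAM[0] = 0xFF;  // two pixels, both color 15, written over and over
    }
}
```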
I.e., at address 0, push two pixels (assuming VRAM is still addressable at address 0)... I assume that this would draw two (color 15) pixels to the screen permanently (as the CPU is in a tight loop)... (this is obviously wasteful of CPU cycles, yes...) Or even if an interrupt WAS required it could probably just be ignored... draw the screen once, then sleep, interrupts briefly wake us, then we go right back to sleep:
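Roughly like this (the wait_for_interrupt() primitive is an assumption about the fantasy CPU, not anything TIC-80 defines today):

```c
#include <stdint.h>

#define VRAM ((volatile uint8_t *)0x0000)   // hypothetical fantasy memory map
#define VRAM_SIZE (240 * 136 / 2)           // 4 bits per pixel

// hypothetical fantasy-CPU primitive: halt until the next interrupt arrives
extern void wait_for_interrupt(void);

int main(void) {
    // draw the screen once (solid color 15)...
    for (uint32_t i = 0; i < VRAM_SIZE; i++) {
        VRAM[i] = 0xFF;
    }

    // ...then sleep; interrupts briefly wake us and we go right back to sleep
    for (;;) {
        wait_for_interrupt();
    }
}
```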
I think it's time to create a project to group all these hardware issues. Maybe I'm missing something, but what's the problem with having exactly the same callbacks, just in the form of interrupts? For what it's worth it would at least be familiar to anyone who has coded for TIC already, and we wouldn't have to invent a whole new system and maintain documentation and separate code paths for it. Speaking of real retro consoles - they all seem to have had at least a VBLANK interrupt, sometimes HBLANK and others on top of it. So TIC's current pipeline doesn't seem to be much out of line. Maybe except BDR, but why not keep it just for compatibility's sake.
Well, per the original request #1007 the idea was to get closer to "real". Just doing the same thing we do for scripting languages for a fantasy CPU doesn't seem "real" to me at all from what I know of such things. Bit-banging out an analog VGA signal (for example) is a VERY time-sensitive process... you could indeed (depending on the speed of your CPU) play around with palette and such concerns per scanline, but you'd have to be very careful to keep everything in sync. Unless you were running quite fast, the whole idea of a per-scanline round trip gets tight:
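Roughly this shape (a sketch - the handler name and the HBLANK-window framing are my assumptions; 0x3FC0 is TIC-80's actual palette address):

```c
#include <stdint.h>

// Per visible scanline, roughly:
//   1. GPU raises a "scanline N is next" interrupt
//   2. CPU jumps into its SCN-style handler and pokes the palette
//   3. CPU acknowledges / returns
//   4. GPU shifts the scanline out
// ...and steps 2-3 have to fit inside the horizontal blanking window.

#define PALETTE ((volatile uint8_t *)0x3FC0)  // TIC-80's palette (R,G,B per color)

void scn_handler(int line) {       // hypothetical interrupt entry point
    PALETTE[2] = (uint8_t)line;    // e.g. ramp the blue channel of color 0 per line
}
```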
And remember we'd have to do this TWICE... once for SCN, then again for BDR... round trips between the GPU/CPU... Now you could solve this by saying "it's all buffered, exact timing is less critical"... so that would presume our GPU has a 100kb BUFFER (or some significant portion thereof)... such that all this data is getting written to the BUFFER... and only when a full buffer is prepared would the GPU output it (aligned with VSYNC, one would guess). But if the video card truly held such a buffer it'd be cool to let us at it more directly... I think it's more fun to pretend it doesn't - and that might force certain limitations on us.

It could be that for fantasy purposes none of this matters, but I think it's worth thinking about all this up front and seeing if we can learn anything from it. Particularly since no one has offered up a real working version yet. :-) I guess the hardware question about interrupts (if we care) is what the interrupt latency is... we can assume the CPU is soft-realtime... such that it has nothing better to do than process interrupts... so if it can't keep up, the picture simply glitches. Do we want that type of realism? ;-) Or do we want the programmer free to ignore all such implementation concerns? :-)
Or at least a tag... @nesbox You have anything against a project?
@joshgoebel from what I know it works like you described in retro consoles. There is a single buffer in the GPU; the GPU constantly draws it on screen while producing V/H blank interrupts, during which the CPU is free to alter that buffer (via direct writes or GPU commands, doesn't matter). Since outputting a VGA signal is a realtime process, there is a limited (but known beforehand) number of cycles to alter the buffer in the interrupt handler. If the interrupt handler takes too many cycles then there is either garbage on screen or, I'm not sure here, the GPU just ignores those commands.
I don't have anything against it, pls create a project if you need to; for now, I'm just watching where this road will lead us :)
Related: #1678 #1660 #1007
I wanted to open a thread just on driving the display "hardware". We've talked elsewhere about how a fantasy CPU might actually draw to the screen:
- Writing to VRAM would be no different than how scripting languages do it.
- Accessing the GPU directly would be some form of memory-mapped registers, perhaps paired specifically with features of the given fantasy (or real) CPU in question. Perhaps you first write to a few registers in RAM, then send a "run_command" signal to the GPU over an IO port, etc...

In this thread I'd like to talk about the whole process in a bit more detail, but especially in relation to all the current complexity of the graphics "pipeline". For example, our current drawing callbacks:
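Sketched as C-style signatures (the comments are my summary of what each does):

```c
void TIC(void);     // once per frame: game logic and main drawing
void SCN(int line); // once per visible scanline (136 per frame)
void BDR(int row);  // once per row, border rows included (144 per frame)
void OVR(void);     // once per frame, drawn on top of the scanline output
```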
This is all done via callbacks... There is some talk that BDR and SCN are currently a bit of duplication, but it's how things currently stand. ...so scripting environments now get:
1 TIC + 136 SCN + 144 BDR + 1 OVR = 282 callbacks per frame... Would these all be interrupts? None of the retro hardware I've worked on used interrupts for screen drawing (Gamebuino, Gamebuino META, Arduboy, etc)... the software on that hardware was all rather simple:
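Something like this (names invented for illustration, not the real library APIs):

```c
#include <stdbool.h>

// Illustrative names only - not the actual Gamebuino/Arduboy library calls.
extern bool frame_time_elapsed(void);
extern void update_game_state(void);
extern void draw_into_framebuffer(void);     // CPU renders into a RAM buffer
extern void push_framebuffer_to_lcd(void);   // blast the buffer to the LCD over SPI

int main(void) {
    for (;;) {
        while (!frame_time_elapsed()) { }    // poll - no interrupts involved
        update_game_state();
        draw_into_framebuffer();
        push_framebuffer_to_lcd();
    }
}
```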
I have no familiarity with how early retro systems worked (NES, Gameboy, etc)... perhaps someone else can weigh in there.
So how might we be thinking about handling this from a fantasy CPU perspective? Is our CPU mostly idle, but it supports software interrupts for TIC, SCN, OVR, BDR? Or would we instead have only a timer interrupt, and the CPU would be responsible for figuring out these other things? Or perhaps we have only a single TIC/VSYNC interrupt that everything keys off of.
I feel like the complexity of the callbacks might force our hand here... for example, currently OVR is a very specific "hardware mode" that allows writing to VRAM but via a bit-mask - i.e. keeping track of which pixels are written and which aren't, such that during the "signal rendering" what you have is:
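Something along these lines, per output pixel (BUFFER/BITMASK are the names used in this discussion; the exact layout here is my assumption, not TIC-80's actual internals):

```c
#include <stdint.h>

#define WIDTH 240

// Names from the discussion above; layout is assumed for illustration.
extern uint8_t  BITMASK[];    // 1 bit per pixel: did OVR write here?
extern uint32_t BUFFER[];     // buffered OVR pixels, already palette-mapped
extern uint8_t  VRAM[];       // 4 bits per pixel: indices into PALETTE
extern uint32_t PALETTE[16];  // current (possibly per-scanline) palette

// One pixel of the final "signal rendering":
static uint32_t output_pixel(int x, int y) {
    int i = y * WIDTH + x;
    if (BITMASK[i >> 3] & (1 << (i & 7)))                     // OVR owns this pixel
        return BUFFER[i];
    uint8_t index = (VRAM[i >> 1] >> ((i & 1) * 4)) & 0x0F;   // unpack the nibble
    return PALETTE[index];                                    // map through the palette
}
```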
Right now there is no way we could do this with just a VSYNC interrupt without access to the "magic" BUFFER or BITMASK... since we'd have no way to do the color mapping needed from the palette, etc... and just allowing us direct access to the BUFFER as if it were part of the hardware would allow us to write 24-bit color games. Meanwhile, driving all this nuance from the CPU side seems weird as well - meaning the CPU saying "GPU, go into OVR now." or "GPU, prepare for scanline 5 now."...
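To make that concrete, a memory-mapped command interface might look something like this (addresses and register names entirely invented):

```c
#include <stdint.h>

// Entirely invented addresses and registers - just to make the idea concrete.
#define GPU_ARG0    (*(volatile uint8_t *)0xFF80)
#define GPU_COMMAND (*(volatile uint8_t *)0xFF81)

enum { GPU_CMD_ENTER_OVR = 1, GPU_CMD_PREPARE_SCANLINE = 2 };

static void gpu_enter_ovr(void) {
    GPU_COMMAND = GPU_CMD_ENTER_OVR;           // "GPU, go into OVR now"
}

static void gpu_prepare_scanline(uint8_t n) {
    GPU_ARG0 = n;                              // "GPU, prepare for scanline n now"
    GPU_COMMAND = GPU_CMD_PREPARE_SCANLINE;
}
```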
Maybe this stuff just doesn't translate perfectly when you start thinking about real hardware... the Gamebuino META (in its 16-color high-res mode) is very close to TIC-80 in some ways... you had a 16-color palette... the CPU was wired to the LCD controller in RGB565 mode (16-bit color)... so whenever it was time to "paint" the screen, whatever 4-bit screen data was in RAM was translated (on the fly) to 16-bit color from a 24-bit color space... we didn't support BDR- or SCN-type palette swaps per scanline, but we easily could have. (especially since there was no VSYNC and the timing wasn't critical)
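That on-the-fly blit, roughly (a sketch under my own assumptions about names and sizes, not the real Gamebuino library code):

```c
#include <stddef.h>
#include <stdint.h>

// Assumed: 160x128 screen, 4 bits per pixel, 16-color palette pre-converted to RGB565.
extern uint8_t  screen4bpp[160 * 128 / 2];        // 2 pixels per byte
extern uint16_t palette565[16];                   // palette entries in RGB565
extern void     lcd_push_pixel(uint16_t rgb565);  // stand-in for the SPI transfer

void blit_frame(void) {
    for (size_t i = 0; i < sizeof screen4bpp; i++) {
        uint8_t two = screen4bpp[i];
        lcd_push_pixel(palette565[two >> 4]);     // high nibble first (an assumption)
        lcd_push_pixel(palette565[two & 0x0F]);
    }
}
```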
But, I'm rambling... so any thoughts on how this whole graphics pipeline might translate over to a fantasy CPU?