Optimize OpenGL Drawing #90
It's been over five years since the last release of OpenGL; there probably shouldn't be any reason to target anything but the latest version. But since the writing is on the wall, perhaps some thought should be put into how to abstract over both OpenGL and Vulkan. Though I'm thinking that might need somebody familiar with Vulkan. |
Regarding the need to keep our eyes on the next graphics API, as @JMC-design was saying: piet-gpu is a good project to follow. They have a 2D/font focus, but they are pushing the envelope on doing as much of the compute for a UI on the GPU as possible. |
We might also want to set a bar for minimum GPU memory. I guess that's something that needs to be tracked; such a weird concept. |
Yet another OpenGL abstraction for Lisp. |
I am very keen to maximize use of the GPU as well as SIMD and multiple cores. I really want our system to be able to handle production-level datasets with the same (or better) speed as commercial packages. How we architect this (improved OpenGL interface, Vulkan, compute on GPU) is something we should discuss. If we do have a Vulkan enthusiast, a first step could be to implement the equivalent of the code in opengl.lisp. Also, one of my goals is to develop a cross-platform GUI toolkit. Currently we're building it on OpenGL, using the text engine by @awolven and the font rasterizer by @JMC-design. |
So I've just drawn my first triangle using vertex arrays, and here are some of my initial thoughts. Writing GLSL in a string in a Lisp buffer is a nightmare of formatting. In the long run it doesn't matter what a person uses to get a string for a shader program, but maybe there should be some default shader DSL, or formatting, to make code and examples easier to read? It seems like it might be nice to encapsulate these buffers into structs that can be passed around easily; then you have to build a bunch of functions to use those structs, and then years later you have CEPL... or something similar. I wonder if anybody has made a comparison of the different layers on top of GL? I'm not even sure if SBCL system pointers work the same way on Windows or macOS, so maybe packing directly into foreign arrays is required? And definitely so if there are any plans to support another implementation. |
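To make the packing question above concrete, here is a minimal sketch of the two options being weighed: packing into a CL array that is pinned during the foreign call, versus packing straight into foreign memory. This assumes SBCL and the low-level `%gl` package of cl-opengl; the function names are hypothetical.

```lisp
;; Sketch only -- assumes SBCL and cl-opengl's low-level %gl bindings.
;; PACK-POINTS and UPLOAD-POINTS are hypothetical names for illustration.

(defun pack-points (points)
  "Pack a list of (x y z) points into a flat specialized single-float array."
  (let ((buf (make-array (* 3 (length points)) :element-type 'single-float))
        (i 0))
    (dolist (p points buf)
      (dolist (c p)
        (setf (aref buf i) (coerce c 'single-float))
        (incf i)))))

(defun upload-points (buf)
  "Hand the packed array's memory to GL. The array must stay pinned
while the foreign call reads from it (SBCL-specific)."
  (sb-sys:with-pinned-objects (buf)
    (%gl:buffer-data :array-buffer
                     (* 4 (length buf))        ; size in bytes, 4 per float
                     (sb-sys:vector-sap buf)
                     :static-draw)))
```

The alternative is to skip the CL array entirely and pack into memory obtained from `cffi:foreign-alloc`, which sidesteps pinning (and implementation-specific SAPs) at the cost of manual freeing.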
You could look at the text engine to see how vertex arrays are used there. There seems, however, at least on Linux, to have been a change in the version of OpenGL used, rendering the text engine useless. I'm trying to fix it, but it would be nice to know if there is going to be a version change before I spend a lot of time targeting a specific version.
On Wed, Sep 7, 2022 at 8:27 AM, Johannes Martinez Calzada wrote:
So I've just drawn my first triangle using vertex arrays and here are some
of my initial thoughts.
I'm assuming we'd like to fill buffers by just sending a list of points?
What I've done for a test is just fill up a cl array, grab the vector-sap,
and use that to fill buffers. With points we have to pack them. Do we pack
into a cl array, pin and use, or just pack directly into a foreign array,
and then free or keep the array around?
Does any packing we do into cl arrays have any effect on packing into simd
packs?
Writing glsl in a string in a lisp buffer is a nightmare of formatting. In
the long run it doesn't matter what a person uses to get a string for a
shader program, but maybe there should be some default shader dsl, or
formatting to make code and examples easier to read?
It seems like it might be nice to encapsulate these buffers into structs
that can be passed around easily; then you have to build a bunch of
functions to use those structs, and then years later you have cepl... or
something similar. I wonder if anybody has made a comparison of the
different layers on top of gl?
I'm not even sure if sbcl system pointers work the same way on windows or
osx. So maybe packing directly into foreigns is required? And definitely so
if any plans to support another implementation.
If anybody is interested, this is the code I used to test:
https://plaster.tymoon.eu/view/3408#3408 ; just replace the
surface:update call with whatever your window needs to swap buffers.
|
I tried, but it reads like C and I don't see any Lispy abstraction. The only thing I see is direct writing of individual bytes to foreign memory. |
These are good questions, and there are a lot of moving parts in how we encode geometry: ease of editing in CL, optimized OpenGL display, SIMD, threads. One possibility I have been mulling over is whether we should keep a low-level C representation which can act like an old-school display list for our geometry classes. We would need to sync up the CL point arrays with these C-type vectors after modeling operations, and those would be optimized for OpenGL and such. Or we could have C-level structs for internal geometry, which we access and modify from CL. That might make CL editing a bit slower, but could result in faster rendering. |
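One way to read the "low-level C representation" idea is a foreign float buffer kept in sync with the CL points after each modeling operation. A minimal sketch, assuming CFFI; the class and slot names here are hypothetical, not kons-9's actual classes.

```lisp
;; Sketch of a "display-list-like" foreign mirror of a geometry's points.
;; GEOMETRY, POINTS, FBUF, and SYNC-FOREIGN-BUFFER are hypothetical names.
(defclass geometry ()
  ((points :initarg :points :accessor points)  ; vector of CL point vectors
   (fbuf   :initform nil :accessor fbuf)))     ; flat foreign float buffer

(defun sync-foreign-buffer (geo)
  "Repack POINTS into a flat foreign single-float array, ready for GL.
Called after modeling operations so the foreign side stays current."
  (let ((n (* 3 (length (points geo)))))
    (when (fbuf geo)
      (cffi:foreign-free (fbuf geo)))
    (setf (fbuf geo) (cffi:foreign-alloc :float :count n))
    (loop for p across (points geo)
          for i from 0 by 3
          do (dotimes (j 3)
               (setf (cffi:mem-aref (fbuf geo) :float (+ i j))
                     (coerce (aref p j) 'single-float))))
    geo))
```

The design trade-off mentioned above shows up directly here: editing still happens on the CL side, and the cost moves into the explicit re-sync step rather than into per-frame conversion.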
Does it include distributed computing as a goal? :-) |
Down the road, why not? :) |
Because there would be a 30MB SBCL runtime per node? I really wish there was something like MirageOS (which uses OCaml) for Common Lisp or Scheme. |
I'm a bit full from their 130-page slide deck on optimization. It looks like OpenGL 4.2+ only, which caused a stomach rumble. Sometimes I wonder, "Why can't we just implement OpenGL in pure Common Lisp and be done with it?" |
I think the approach is still interesting. Today I'm going to test whether it makes any difference packing arrays from different types of points into CL arrays that are pinned and sent, versus foreign arrays that are sent directly. |
I agree, especially given the potential performance improvement. (I don't like vinegar on my salad, but wouldn't suggest other people shouldn't enjoy it, if you can tolerate one more food joke.) Thank you for posting the link and doing the testing. I don't have a (capable enough) Mac to try it out on either, but if you do have success I wonder if it would help for you to post a simplified gist somewhere so someone who does could try it out. |
Trying to come up with a good test for display as well. But so far, with just 333,333 points, there's no time difference in packing CL arrays from either origin vectors or 3d-vector structs. Packing from vectors uses slightly less CPU, but I probably need more points, since this is all taking ~0.004 seconds (0.020 seconds using generic functions). |
So here's just some basic testing. If you make smaller arrays, then origin's lead widens. Whether it's worth the trade-off in not being able to dispatch on... |
Nice work. Is the cost of sending SBCL pointers and FFI arrays to OpenGL (and GPUs) the same? On a slight tangent, should we bite the bullet and go with double-float as our default? Or is the performance hit a serious one? |
I can't see why it would be different, as they're both just pointers to memory, unless being in SBCL's memory space somehow affects it. That's why I think an actual drawing test might elucidate further, at least just in terms of packing/repacking something over and over. I don't know if I've been reading outdated stuff, but what I've seen is that lots of OpenGL drivers will just convert to single-float as their internal format. Support for doubles in general-purpose compute is relatively new, requires OpenGL above 4.1, and in some cases a newer card. I've seen figures of half to one-third the performance of singles. |
OpenGL is a foreign library.
|
I'm a Vulkan enthusiast, but I have too much on my plate at this time to volunteer for porting opengl.lisp. I can provide Vulkan bindings and some sample code on how to make triangles and triangle strips of various colors render in Vulkan, but I have the text engine to debug/extend and a whole host of other projects relating to other things. Perhaps someone who doesn't necessarily have Vulkan experience could volunteer. Vulkan's not that hard.
For the macOS platform, I'm working on cl-metal using an Objective-C bridge from fiddlerwaoroof. Vulkan does work on Mac, but it doesn't support compute shaders yet, so I'm going the Metal route rather than wait on MoltenVK. Metal and Vulkan use different shading languages, so it would be great if someone could work on a (possibly CEPL-based) Lisp syntax that could be compiled to either GLSL 4.5 for Vulkan or the Metal shading language.
|
I volunteer to make an attempt this month. What do I need to know to start off in the right direction? (Either in absolute terms or based on the tiny start I made in #109 a ways back.) |
I'm interested in trying to write this. I will try to build on what @JMC-design has proposed and the text-rendering engine @awolven has written. It would probably make sense to reuse parts of the code of the text-rendering engine. In order to do so I would have a lot of questions, since there are a lot of things I don't understand the purpose of. It seems like a pretty advanced implementation to me, which takes a lot of the nitty-gritty details of OpenGL into consideration; am I right? Anyway, I'll start by proposing something, and hopefully we can improve on it incrementally with your feedback. |
The text rendering engine is a two-implementation immediate-mode hack to
get Kaveh working with text. I say two implementations because, so long as
Kaveh uses OpenGL 1.1 for the rest of Kons-9, macOS will be a different
implementation than any "modern opengl" implementation used in Windows and
Linux. This is because the opengl 2.1 implementation of macOS is not
forward compatible with opengl 3+, unlike Windows and Linux. So there is
an opengl 2.1 and an opengl 3.3 version of the text rendering engine. The
least common denominator is opengl 2.1 and opengl 2.1 doesn't even use
shader programming. So to borrow your term "modern opengl"... a modern
opengl version of Kons-9 would require a rewrite of the logic in
opengl.lisp at the minimum.
This has been done before with vulkan, however Kaveh rejected the vulkan
branch and continued to make changes to the main branch until the vulkan
branch bit rotted. Kayomarz has volunteered to update the vulkan
implementation, but only has weekends to work on it and has not posted any
updates for that effort in some time.
By modern opengl, I am assuming you are talking about opengl 3.3+. Kons-9
is in need of a proper graphics engine to make developer's lives easier and
make the program scalable functionally, opengl or otherwise. A modern
opengl implementation would be based on now decades-old tech and would
essentially be reimplementing the logic of the vulkan engine (called
"krma"), which can render thousands of text characters without so much as a
blip in the frame rate unlike the immediate mode implementations currently
in Kons-9. So if you want to upgrade kons-9 to a modern opengl version for
GLSL programming and you have little OpenGL or Common Lisp experience, you
would basically just be spinning your wheels...for a lot of reasons.
First, this type of work takes knowledge, and second, if there is some kind
of absolute insistence on using openGL instead of something newer like
vulkan, you're better off adding that capability to krma and letting
Kayomarz finish porting opengl.lisp to krma, which would allow for kons-9
to support both opengl and vulkan.
As far as "modular", krma is modular and you can add and remove pipelines
while the program is rendering.
Furthermore, while krma fully supports the immediate mode rendering
paradigm of kons-9, in the long run one will want to support retained mode
paradigms, for performance, which is going to require somewhat of a
reorganization of kons-9, unless you live in a cold cabin and need your PC
to double as a toaster oven.
-Andrew Wolven
|
I used to render movies on my Mac Dual G4 only in the winter in Colorado, because it used nearly 1500W, like a hair dryer (which would have been quieter).
Retained mode caching in OpenGL-based scene graphs usually used "display lists". What method exists to do that now? |
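Display lists were removed from the OpenGL core profile; the closest modern equivalent of that retained-mode caching is vertex data uploaded once into a VBO and recorded in a VAO, then replayed each frame. A sketch using cl-opengl; the function names are hypothetical.

```lisp
;; Modern stand-in for a display list: upload once, draw many times.
;; MAKE-RETAINED-MESH and DRAW-RETAINED-MESH are hypothetical names.
(defun make-retained-mesh (packed-floats)
  "PACKED-FLOATS is a flat single-float array of xyz triples.
Returns a VAO that can be replayed like an old display list."
  (let ((vao (gl:gen-vertex-array))
        (vbo (gl:gen-buffer))
        (arr (gl:alloc-gl-array :float (length packed-floats))))
    (gl:bind-vertex-array vao)
    (gl:bind-buffer :array-buffer vbo)
    (dotimes (i (length packed-floats))
      (setf (gl:glaref arr i) (aref packed-floats i)))
    (gl:buffer-data :array-buffer :static-draw arr)  ; cached GPU-side
    (gl:free-gl-array arr)
    (gl:vertex-attrib-pointer 0 3 :float nil 0 (cffi:null-pointer))
    (gl:enable-vertex-attrib-array 0)
    (gl:bind-vertex-array 0)
    vao))

(defun draw-retained-mesh (vao vertex-count)
  "Replay the cached geometry each frame; no per-frame uploads."
  (gl:bind-vertex-array vao)
  (gl:draw-arrays :triangles 0 vertex-count)
  (gl:bind-vertex-array 0))
```

Unlike display lists, the geometry stays mutable: `gl:buffer-sub-data` can rewrite regions of the VBO without rebuilding the VAO.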
I see. I could also join the effort of porting kons-9 to krma then, if that makes more sense. I'm mostly interested in having a rendering engine I can understand and modify on the fly. If krma can fulfill this role, I'm in. About the modularity of krma: how would you do things like offscreen rendering and multiple passes? How would you create and load custom pipelines? Having some simple examples would be nice. |
I have the feeling anything I say here is going to get me in trouble with someone. Adieu. |
Could krma evolve to become something like CEPL for Vulkan? Because that's in the end what I am looking for: a CL interface to a graphics API. Not just the bindings, of course, but an interface that makes programming OpenGL or Vulkan in CL more natural. |
+1 Adieu to this topic. |
Use vertex arrays and the like to speed up the current naive drawing code in opengl.lisp.