This is a scene-based rendering API built on top of wgpu.
A basic scene can be created like this:
```rust
fn main() {
    let scene_width = 1000;
    let scene_height = 800;
    let mut scene = qsr::Scene::new(scene_width, scene_height);
    scene.set_camera(qsr::camera::Camera {
        eye: cgmath::Point3 { x: 0.0, y: 16.0, z: 32.0 },
        target: cgmath::Point3 { x: 0.0, y: 0.0, z: 0.0 },
        up: cgmath::Vector3::unit_y(),
        aspect: scene_width as f32 / scene_height as f32,
        fovy: 45.0,
        znear: 0.1,
        zfar: 1000.0,
    });
    scene.add_light(qsr::LightNode {
        location: [15.0, 10.0, 0.0],
        color: [1.0, 1.0, 1.0],
    });
    scene.create_node()
        .with_model(qsr::ModelSpec::ObjFile {
            path: "resources/aircraft/aircraft.obj",
            texture_path: None,
        })
        .with_transform(qsr::Transform::Scale([3.5, 3.5, 3.5]));
    let _ = qsr::driver::Driver::run(&mut scene);
}
```

This will pull up a window with the scene rendered.

The .obj file is rendered model by model, and the correct textures are attached to each model based on what the file specifies.
You can add more nodes to the scene graph like this:
```rust
fn main() {
    let scene_width = 1000;
    let scene_height = 800;
    let mut scene = qsr::Scene::new(scene_width, scene_height);
    scene.set_camera(qsr::camera::Camera {
        eye: cgmath::Point3 { x: 0.0, y: 16.0, z: 32.0 },
        target: cgmath::Point3 { x: 0.0, y: 0.0, z: 0.0 },
        up: cgmath::Vector3::unit_y(),
        aspect: scene_width as f32 / scene_height as f32,
        fovy: 45.0,
        znear: 0.1,
        zfar: 1000.0,
    });
    scene.add_light(qsr::LightNode {
        location: [15.0, 10.0, 0.0],
        color: [1.0, 1.0, 1.0],
    });
    scene.create_node()
        .with_model(qsr::ModelSpec::ObjFile {
            path: "resources/aircraft/aircraft.obj",
            texture_path: None,
        })
        .with_transform(qsr::Transform::Translate([-8.0, 0.0, 0.0]))
        .with_transform(qsr::Transform::Scale([3.5, 3.5, 3.5]));
    scene.create_node()
        .with_model(qsr::ModelSpec::Custom {
            name: "tree",
            geometry_path: "resources/meshes/tree.obj",
            material_info: qsr::MaterialDesc {
                diffuse_texture: Some("resources/materials/default_grid.png".into()),
                ..Default::default()
            },
        })
        .with_transform(qsr::Transform::Translate([8.0, 0.0, 10.0]))
        .with_transform(qsr::Transform::Scale([0.5, 0.5, 0.5]));
    let _ = qsr::driver::Driver::run(&mut scene);
}
```

This results in an image like the following:

Note that the new object is loaded from an .obj file as before, but it is specified with qsr::ModelSpec::Custom, which allows the material and geometry specs to be given separately rather than read from the same file.
This renderer is a personal experiment to play around with different architectures and rendering techniques.
Right now, it uses a deferred rendering architecture: each renderable first writes its geometry data (albedo, depth, normals) to gbuffers, and later render passes sample those textures to perform their shading.
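To make the two-phase ordering concrete, here is a minimal, self-contained sketch of the deferred idea. This is not qsr's actual code: the real passes are wgpu render passes writing to and sampling from GPU textures, and `GBuffer`, `GeometryPass`, and `LightingPass` below are hypothetical stand-ins that only model the data flow (geometry writes first, screen-space passes read afterwards).

```rust
/// Stand-ins for the gbuffer attachments described above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum GBuffer {
    Albedo,
    Depth,
    Normals,
}

struct GeometryPass;
struct LightingPass;

impl GeometryPass {
    /// Phase 1: every renderable writes its per-fragment data to the gbuffers.
    fn run(&self) -> Vec<GBuffer> {
        vec![GBuffer::Albedo, GBuffer::Depth, GBuffer::Normals]
    }
}

impl LightingPass {
    /// Phase 2: shading reads the gbuffers instead of re-rasterizing geometry;
    /// returns whether the inputs it needs are present.
    fn run(&self, inputs: &[GBuffer]) -> bool {
        inputs.contains(&GBuffer::Albedo) && inputs.contains(&GBuffer::Normals)
    }
}

fn main() {
    let gbuffers = GeometryPass.run();
    let ok = LightingPass.run(&gbuffers);
    println!("lighting pass had its inputs: {}", ok);
}
```

The point of the split is that shading cost becomes proportional to screen pixels rather than to scene geometry, at the price of the gbuffer memory.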
The renderer is based around a central qsr::gfx::Context object which interfaces the renderer with GPU operations.
Textures are created through the context and managed by a TextureRegistry, which stores the underlying wgpu::Texture and wgpu::TextureView objects and distributes lightweight handles to them.
When a texture is created that matches an existing one, it is deduplicated to prevent excessive resource use: the registry returns a handle to the existing texture instead of allocating a new one.
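A minimal sketch of that deduplicating-registry pattern, with a `String` standing in for the stored wgpu::Texture / wgpu::TextureView pair. The key type, handle type, and `get_or_create` method name here are hypothetical, not qsr's actual API.

```rust
use std::collections::HashMap;

/// Opaque handle distributed to callers instead of the texture itself.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct TextureHandle(usize);

#[derive(Default)]
struct TextureRegistry {
    /// Stand-in for the stored (wgpu::Texture, wgpu::TextureView) pairs.
    textures: Vec<String>,
    /// Dedup map: creation key -> handle of the already-created texture.
    by_key: HashMap<String, TextureHandle>,
}

impl TextureRegistry {
    /// Returns the existing handle if an identical texture was already
    /// created; otherwise stores the texture and hands out a fresh handle.
    fn get_or_create(&mut self, key: &str) -> TextureHandle {
        if let Some(&handle) = self.by_key.get(key) {
            return handle; // deduplicated: no new resource allocated
        }
        let handle = TextureHandle(self.textures.len());
        self.textures.push(key.to_owned());
        self.by_key.insert(key.to_owned(), handle);
        handle
    }
}

fn main() {
    let mut registry = TextureRegistry::default();
    let a = registry.get_or_create("resources/materials/default_grid.png");
    let b = registry.get_or_create("resources/materials/default_grid.png");
    assert_eq!(a, b); // second request reuses the first texture
    assert_eq!(registry.textures.len(), 1);
    println!("handles equal: {}", a == b);
}
```

Handing out indices rather than references keeps borrows short-lived, which is why handle-based registries are a common pattern in Rust renderers.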
This is still in very early development. The goal is to add more and more built-in render passes to the render graph. The immediate ones are:
- Alpha Forward Pass [STATUS: done]
- Normal Mapping [STATUS: partial]
- Phong lighting [STATUS: partial]
- Bloom [STATUS: not started]