A journal of process. The previous incarnation of this journal is available here: https://github.com/pippinbarr/tankses/wiki/PROCESS
2018-01-26 17:42 in which I inaugurate the process journal for the new repository and take some stock
I am sitting at the registration desk for the Global Game Jam 2018 at TAG.
You will not have noticed this, but I have started a brand new repository and associated wiki for the Tanks!es project. That's because
- The technical setup was getting untenable in terms of mess with the original repo
- The conceptual setup was getting untenable in terms of the original positioning
So here we are, back at zero. Except we're totally not at zero because I have learned from all those experiences. (And, jokes aside, there's plenty of good thinking and direction in the earlier repo and I'll be referring to and extending on those ideas.)
The technical setup
Now it seems to me that the technical setup should be in a considerably better state moving forward. I committed a working version of the project with a nice(r) AI setup and a few of the same tweaks as previously like getting rid of Realtime GI and PostProcessing on the camera to save some space. In fact the project folder is all the way down at like 60MB. Which makes me feel like a success?
Moving forward it's entirely plausible that the technical thing will fall apart again simply because Unity is a bit of a beast to work with and there are multiple learning curves taking place all the time. At any rate, I don't have significantly different plans for the technical approach other than to learn from my "mistakes" of the past and to keep trying to be a good person.
The conceptual setup
More "serious" right now is the state of the conceptual setup of the project. I've been doing a lot (and I do mean a lot) of writing about it, but never quite nailing down what the parameters are for making variations. The idea of manifestos has been kind of helpful within this, and I think I've genuinely done some good work in terms of grappling with a bunch of the attendant issues, but it's definitely still... difficult.
I will write more about this later - I don't have the headspace right now.
2018-01-27 12:06, in which I tackle the conceptual side for the umpteenth time, resulting in the realisation this is two games: tanks!es and !tanks! (or tanks!?)
The core problem with the project has never been the technology (however painful it has been), it's always the question of concept.
There are two "big ideas" fighting for supremacy it seems
- A conversation with the ontological materials of Unity
- A conversation "about" Tanks! probably including violence as a "concern"
It has felt on many occasions like doing both of these becomes problematic.
The ontological conversation feels as though it needs something extremely formal in order to really be true to its nature. That would be a series of very strict alterations to Tanks! that focus on the qualities of the ontological entity in question (e.g. Light). That is, what happens with no light, what happens with reflective materials, and so on. The problem here is that the "what happens?" question seems quite flaccid. The design seems absent in this conversation and it becomes "just" manipulations of technical details in service of nothing?
The Tanks!/Violence conversation feels important, but also an easy way to slide into cliché or cheap ideas. The kind of "ha ha, you can't kill each other" idea feels trite at this point. (Note, though, that the students in CART 415 are often working around ideas of non-violence or other-than-violence very successfully - it's not that it can't be done, but the many-small-variations format feels like it misses out on that subtlety.)
Emblematic of this problem has been the name itself actually. Tanks!es: Light skews hard to the ontological side and really makes no sense in terms of the conversation idea other than variation.
I've been wondering about positioning the game in terms of negation. Then I could call the game !Tanks!, pronounced Not-Tanks!. Negation is a more formal way of having a design conversation on top of the ontology stuff. How can the ontological materials be used to negate the original game in some way. Not necessarily as a commentary on violence so much as a conversation with the material of the game itself.
A risk with this is that it's perhaps too easy to negate a game? And it's questionable whether it's interesting to do so. It does evoke something like Greimas and the semiotic square, which could be a powerful and interesting theoretical basis? Not just pure contradiction but negation and non-contradiction?
Idea complexity, metaphor, design
An oddness here is that I suspect a more formal approach (say negation plus ontology) will give me a firmer footing but quite likely won't allow for the more metaphorical versions of the game? Like, would the A little privacy game about having sex in the shadows make sense as a "negation"? I suppose it would, but there's a sensation of it possibly being over-elaborate in the context of more formal experimentation? And if the negation is more about transforming war into love(making) I'm not seeing where Light comes in specifically - you could do a more direct version of this that doesn't emerge from Light.
Design "inspired by" the ontological entity leads to more creative potential, but strays from a strict approach to the work. Design strictly adhering to a formal system resolves that, but may well lose the potential for interesting and insightful design.
What is the most important thing to me here?
The most important thing is to feel that I am pursuing a conversation with materials with each of the ontological entities of Unity.
I chose Tanks! as a base platform because it's representative of entirely straight design and thus in a sense comes across (to me) as a kind of neutral platform to work on top of.
Part of the problem of Tanks! specifically is its obvious non-neutrality in terms of my personal design and ethical beliefs, creating a tension I have been (semi-consciously) trying to resolve by destroying or "commenting on" the base game through the conversation.
In dealing with ontological entities in Unity there is a key difference between what that entity can mean/signify (e.g. Light can mean goodness, heaven, performance, the passage of time) and how that entity affects gameplay/function (e.g. Light can increase or reduce visibility, make it selective).
There is a tendency to think that the more formal approach centres on function and not signification, but as I write this I don't know if I believe it.
The technical and the aesthetic
This feels like me circling back to my original position that the ontological entity can be considered metaphorically, functionally, etc. without penalty.
Is it "just" that a collection can have both coexisting? There can be variations which are purely addressing formal characteristics of Light (no light, maximum light, fog, torchlight) and variations that are addressing signification and metaphor (walking into the light, do not go gentle into that good night, the light of a tv screen with a tank watching tanks).
The extremities and demonstrations of function and parametrisation of an ontological entity (like Light) are important for demonstrating ranges of capacity of the engine in a "neutral" setting (the base game).
The metaphorical explorations of the ontological entity reach more for implications and meaning of the ontological entity and how the base game/components can shift in meaning through a (primary) focus on it.
Is it possible that you would even have a single variation that focuses on exploring the various affordances available on a timer? Like a cycle through ten second experiences of different parameterisations of the lighting of the existing game? No light, moving directional light of varying brightness, ambient light cycling colours, etc.? Even procedural variations on those lights? (Would you make a variation of parameterisation for each light that already exists? e.g. for the shell light, the sun, the ambient, the particle effects? Or for each possible form of light the game could involve? In which case absent lights like a spotlight are brought in? Fog is brought in? But no area lights, no probes - they don't work in my setup.)
Note that there isn't a video player in the core game for example, so there are instances where you can't just make variations on existing components, you have to figure out how to position them in the scene and then display their possible variations.
So for light it's Ambient, Directional, Point, Spot, Fog, for example. Those can be explored through their parameterisation. The only kinds of light that aren't already there are Spot and Fog, so you would need a way to include them in a "neutral"(??) way and then run through their parameterisation?
Is this two games?
Do we have
- Tanks!es with variations on every ontological entity? Literally a "tech demo" of the engine itself? Perhaps each category has a game and within that it cycles through parameterisations? Or every subcategory does that? And where possible it's about parameterising the existing materials? Not quite sure how to divide up this pie.
- !Tanks! which uses the same ontological affordances but uses them to jump into negations, contradictions, problematisations of the original Tanks! via metaphor and actual design? (e.g. Into the light, do not go gentle, make-out-not-war, and so on?) Or frankly doesn't even do that in terms of the ontology? But I feel like it should because this is still another way of being in conversation with those ontological materials?
I feel like there's a perverse pleasure to Tanks!es and just repeating the same game but being focused on the engine itself, breaking the idea of a consistent world? Plausibly you could have the Atari style thing of flipping through the different "channels" of the game.
I think it's okay if !Tanks! doesn't explicitly leverage the ontology in its packaging - like it doesn't need to have subsections according to the ontology, even if that's the actual design stance I take behind the scenes. Whereas Tanks!es would wear that on its sleeve, the sectioning and game titles would explicitly revolve around the technical categories.
It seems to be two games.
I'm more and more comfortable with the idea that this is two games. The next question is which am I actually working on? I suppose the answer is "both to some extent" given that they begin with the same materials. But I still need the active project itself to be identified, and it makes sense to me that it would be Tanks!es that I'm working on first because it's more of a frank engagement with the tools and would put me in a better position to then approach the similar task metaphorically etc..
Questions that arise once we acknowledge this as a "purely" technical conversation with materials:
- Is this a series of separate games or one game with a menu system?
- Is there a separate game per base concept (Light, Camera, Effect, ...) with a collection of approaches to the subconcepts involved (e.g. for Light we have the kinds of lights, fog, ambient, etc.)?
- Is there one mega-game which displays variations of all possible entities at once?
- Is there the half-way zone where Light is a single "level" that cycles through the various parameterisations of the subsidiary parts?
My inclination is toward individual levels tackling individual elements to the extent that they can be addressed. So you have
- Light (Spotlight, Point light, Directional light, Fog, Ambient light...)
- Camera (Perspective, Orthographic, Don't Clear, Viewport, ...)
- 3D Object (Sphere, Cube, Cylinder, Plane, ... Ragdoll, Wind Zone, Tree, Terrain, ...)
- Effect (Particle System, Trail, Line)
- UI (etc.)
- Audio (Source, Reverb Zone, Mixer, Effects)
In which case I get the sense that it's better to have separate builds for each concept. Tanks!es: Light, Tanks!es: Camera, Tanks!es: Audio. Because otherwise we might have a pretty overwhelming menu system? Or perhaps not, maybe the menu system is fine.
Further to this, it's not impossible this could be a real time thing in which you allow the player to alter the parameters or to turn specific elements on and off during a single version of play? Or that there is a randomising element involved such that each time you play you get some set of elements and parameters?
So there's a format question
There's a serious question in here of just how you format the possibilities because there is
- The presence/non-presence of a specific entity (spotlight, slider, whatever)
- The parameterisation of that entity (brightness, angle, range)
- The possibility of multiple simultaneous entities [leaning to no here because I think it muddied the waters?]
This returns us to the question of intent.
Let's circle back to our original idea.
The project is a conversation with materials where the materials are the ontological entities of the Unity game engine. The fact the game we use is Tanks! is, in this sense, kind of irrelevant except as something which throws into relief the presence/(absence?)/alteration of ontological entities.
But there's a lurking question here around the interest value of such an experiment. Yes I can do it (with some more thought), but what would be the benefit/interest in experiencing the game that results? You'd be offered the same game over and over again with variations on a specific element and as such have your attention drawn to that element.
To draw attention it would be important to either randomise the parameterisation or to change it during play I suspect, and probably the latter to avoid cases that are too similar to the original. So a Fog level would have the fog tick through different possible forms of fog every n seconds (n=10?). And as you "played" you would necessarily spend more time noticing and negotiating with the changing imagery of the level rather than the game itself.
I think that makes the most sense. In which case there would be one level per specific idea (Fog, Directional Light) and their parameters would all be randomly set every n seconds, resulting in new visual/kinetic/audio understandings of the underlying game. Or rather, the stable nature of the underlying game allows for that perception.
That also sounds "easy" to make, and kind of worthwhile.
So that's what we're making. Then we turn our attention to !Tanks! (not-tanks) after that.
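As a sanity check on the "re-randomise every n seconds" scheme, here's a minimal sketch in plain C#, outside Unity. FogParams is a placeholder of my own, not Unity's actual RenderSettings, and the field names and ranges are assumptions; in the real project Tick would be driven by Update with Time.deltaTime.

```csharp
using System;

// FogParams stands in for the fog-related settings (assumed names/ranges).
public class FogParams
{
    public float Density;
    public float StartDistance;
    public float EndDistance;
}

public class ParameterCycler
{
    readonly Random rng;
    readonly float interval;
    float elapsed;
    public FogParams Current { get; private set; }
    public int CycleCount { get; private set; }

    public ParameterCycler(float intervalSeconds, int seed)
    {
        interval = intervalSeconds;
        rng = new Random(seed);
        Current = Randomise();
    }

    FogParams Randomise() => new FogParams
    {
        Density = (float)rng.NextDouble(),               // assumed range [0,1]
        StartDistance = (float)rng.NextDouble() * 50f,   // assumed range
        EndDistance = 50f + (float)rng.NextDouble() * 250f
    };

    // Call once per frame with the frame's delta time (Unity: Time.deltaTime).
    public void Tick(float deltaTime)
    {
        elapsed += deltaTime;
        if (elapsed >= interval)
        {
            elapsed -= interval;
            Current = Randomise();
            CycleCount++;
        }
    }
}
```

The same accumulator pattern would work for any of the level types (Fog, Directional Light, etc.), with only the Randomise body changing per entity.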
2018-01-27 17:21, in which I reflect on the hyper-formal approach to Directional Light
This evening I followed through on the discussion above to build a scene that works with the parameters of Directional Light. I tried two approaches:
- Jump cuts, where target parameters are randomised for "all" properties of a directional light (including cookies, though I'm not so sure about them) and are jumped to on a specific interval, regenerated, and so on. It yields a deeply frenetic and kind of visually upsetting experience.
- Lerping between random parameter sets. Smoother, lets you see things like shadows travel across the structures. Satisfying in that it captures the "analog" nature of the parameters.
The question here is which technique (including techniques untried or un-thought-of) makes the most sense/communicates best in terms of the core idea of a formal exploration of a specific ontological entity (Light) and a specific expression of that entity (Directional Light).
- Lerping feels meaningful in terms of making visible the continuity of the properties. Jump cuts are pretty striking, though, and allow for extreme juxtapositions.
- The profusion of parameters is daunting - changing them all at once (whether lerping or jumping) is pretty confusing and busy - there's shadows moving, light levels changing, shadows jumping, colours shifting, cookies appearing and disappearing, and so on. It's a lot to take in and perhaps too much to take in in a way that feels meaningful?
You could change them one at a time (in a sense this would be like the kinds of realtime manipulations you can perform in the editor itself actually, which might be a nice metaphorical way of thinking about this? Thinking about these things as miniature acts of design?). In that way you might have it randomly choose a parameter, then choose a random amount of time to lerp to a random target value. That would have the appearance of these things being adjusted by some kind of "intelligence"? You'd have more of a sense of being inside a still-in-progress development where it's trying to get something "right"?
Changing them one at a time might help a bit more in terms of comprehending them.
This also opens up the possibility that one would also display the parameter being edited live on the screen somewhere, like a subtitle, including the actual current value of the parameter as it changes, even a little slider or something as it changes... that could be kind of beautiful to look at?
Okay I think that's the next approach to this. Okay. That's a good place to go from here.
2018-01-27 23:00, in which some quick bullet points about representing modifications of ontological entities
- Need a way to indicate the currently manipulated property (a slider might be interesting where relevant, at the very least the name of the property and the currently set number which can then change over time/lerp)
- Consider whether we need the possibility of adding or duplicating the entity under consideration - especially important in situations where there isn't one already in the scene, like a Wind Zone or a point light?
- Need a way to indicate which example of an entity is being manipulated at this moment in the case of multiple - such as meshes (a bounding box?)
- What are the limitations (if any) on properties that can go to ridiculous ranges like the transform - does it make sense to reposition a selected tank, say, 1000 units off screen? It may or may not. Certainly it should be possible to position things outside the bounds of the level? (Should it?)
- Many of the entities can be thought of as transforms and (maybe) bounding boxes, but not all of them - tricky to draw 'gizmos' for things like lights, cameras, and other things?
That's it for now!
2018-01-28 12:08, in which I get more detailed about the requirements of this representation of parameter changes issue
I'm currently pretty sold on the idea that what should happen during play is
- an entity is added if relevant (e.g. Wind Zone)
- an individual entity is selected (if relevant)
- a parameter is selected
- a target value is generated
- the parameter is lerped to the target value over time
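The select-parameter/generate-target/lerp loop above can be sketched in plain C#. FakeLight is a stand-in for UnityEngine.Light, and the parameter names and ranges here are placeholder assumptions, not the real component's.

```csharp
using System;

// FakeLight stands in for UnityEngine.Light (fields/ranges are assumptions).
public class FakeLight
{
    public float Intensity;
    public float ShadowStrength;
}

public class LerpAgent
{
    readonly FakeLight light;
    readonly Random rng;
    Action<float> setter;        // writes the value currently being lerped
    float start, target, duration, elapsed;
    public string CurrentParameter { get; private set; }

    public LerpAgent(FakeLight light, int seed)
    {
        this.light = light;
        rng = new Random(seed);
        PickNext();
    }

    void PickNext()
    {
        // select a parameter, then generate a target within its range
        if (rng.Next(2) == 0)
        {
            CurrentParameter = "intensity";
            start = light.Intensity;
            target = (float)rng.NextDouble() * 8f;    // assumed range 0..8
            setter = v => light.Intensity = v;
        }
        else
        {
            CurrentParameter = "shadowStrength";
            start = light.ShadowStrength;
            target = (float)rng.NextDouble();         // range 0..1
            setter = v => light.ShadowStrength = v;
        }
        duration = 1f + (float)rng.NextDouble() * 4f; // random lerp speed
        elapsed = 0f;
    }

    // lerp to the target over 'duration', then pick the next parameter
    public void Tick(float deltaTime)
    {
        elapsed += deltaTime;
        float t = Math.Min(elapsed / duration, 1f);
        setter(start + (target - start) * t);
        if (t >= 1f) PickNext();
    }
}
```

The random duration per parameter is the "random speed of lerping" idea noted below; CurrentParameter is what the on-screen subtitle would display.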
One 'solution' to entities that don't already exist in the game is to add a completely vanilla one ahead of time (utter default settings) and then manipulate that. That might be more acceptable and feels pretty neutral as an act? (Though I guess you can query whether or not the interactions/relationships between multiple instances of an ontological entity are vital to understanding? Could be.)
- Need to be able to know all possible entities to select (relevant where there are multiple, e.g. 3D objects!)
- Need to be able to indicate the selection? You could not do this, but it seems like it could be disorienting to the player.
- A "simple?" option would be to indicate selection textually if entities in the game have unique names or ids? (Ideally names)
- The more complex visual approach would be to create a bounding box around the selected entity where relevant (though some entities, like a directional light, don't seem like they would have that?)
- Need to know all parameters < this seems like something you would hard code?
- Need to be able to indicate the selected parameters < presumably text on the screen with the parameter name?
- Need to know the type of the parameter (float, texture, bool, enum)
- Is it desirable to break all possible parameters down to their smallest unit? (e.g. ColorHSV can break into three floats, so can rotation, there are also parameters that have literally no effect, like rotating around z for a directional light? Though that affects subsequent rotations I suppose. Gimbal lock? Fuck off.)
- This is based on the type chiefly of course
- But there's an open question here concerning whether I should delimit possible ranges? There are a lot of "uninteresting" possibilities for, for instance, the x position of the transform, since the vast majority will position an object off camera
- The problem is that the "true" version of this would be to not worry about that, but the result would be everything pretty swiftly vanishing, rather than visibly changing
- Seems like I need to set specific limits for at least some things, avoiding it where possible, and ideally having a strong answer for the 'why' (e.g. it should be possible to go off camera, but once off camera there's no point in going further?)
- Need to do the correct lerp for type
- Seems like I should choose a random speed of lerping as well, so that there's a more procedural and 'human'/designer feeling... not having everything moving at the same speed
- Some kind of UI indication of the lerp as it happens is important
- Base case is just a textual representation (a float changing, a bool flipping)
- But something like a slider would be a good representation for a float for example and would evoke the UI of the underlying engine to an extent...
- It goes on forever.
- Is this the kind of thing where I should be building some generic system that lerps arbitrary values and displays arbitrary UI elements on screen to reflect them?
- It's perhaps the case I should do a specific version for Light and then see whether I can reverse engineer something successfully generic?
- Generic would be particularly great for things like random selection? An array of a parent class using polymorphism to select specific subtypes which can then set target values, display, and lerp on their own under generic control?
- IDs for elements and whether I can get them as text (ideally with the actual name of the element?)
- Bounding boxes in debug style (have seen information about this)
- Addressing parameters based on a string of their name?
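The "array of a parent class using polymorphism" idea could look something like this minimal sketch, where each subtype knows how to set its own target, lerp, and describe itself for the on-screen readout. The type and parameter names are hypothetical, not tied to anything in Unity.

```csharp
using System;
using System.Collections.Generic;

// Base class: the generic controller only ever sees this contract.
public abstract class ParameterLerp
{
    public string Name;
    public abstract void PickTarget(Random rng);
    public abstract void Step(float t);    // t in [0,1]
    public abstract string Describe();     // the 'subtitle' text
}

public class FloatLerp : ParameterLerp
{
    public float Min, Max, Start, Target, Value;
    readonly Action<float> apply;

    public FloatLerp(string name, float min, float max, Action<float> apply)
    { Name = name; Min = min; Max = max; this.apply = apply; }

    public override void PickTarget(Random rng)
    { Start = Value; Target = Min + (float)rng.NextDouble() * (Max - Min); }

    public override void Step(float t)
    { Value = Start + (Target - Start) * t; apply(Value); }

    public override string Describe() => $"{Name}: {Value:0.00}";
}

public class BoolFlip : ParameterLerp
{
    public bool Value;
    readonly Action<bool> apply;

    public BoolFlip(string name, Action<bool> apply) { Name = name; this.apply = apply; }

    public override void PickTarget(Random rng) { Value = rng.Next(2) == 0; }
    public override void Step(float t) { apply(Value); }  // bools jump, no lerp
    public override string Describe() => $"{Name}: {Value}";
}
```

A level script would then hold a List&lt;ParameterLerp&gt;, pick one at random, and call PickTarget/Step/Describe without knowing the concrete type.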
2018-01-28 20:09, in which I try to think through the one-parameter-at-a-time version of this thing
- If I were to handle one parameter at a time I'd need to be able to randomly select the parameter during runtime.
- One option would be a giant switch statement that handles each specific case, that could be the easiest and would certainly "just work" but it's pretty fragile in terms of making changes?
- Another option would be to (somehow) store parameter names in an array, choose them, and then call an associated method that would act on that named parameter... then you could hypothetically have one method per type rather than a method per parameter? So a LerpColor, a LerpFloat, and so on, with specified limits etc. as needed? The method would set a target value in the range, and then lerp the specific parameter to that value, hopefully while also handling UI display appropriate to that type (e.g. a slider for a float, a name for an enum, a set of RGB values (sliders?) for a color, a texture name for a texture, a true/false checkbox for a bool?...)
- The beauty there potentially would be maybe being able to have a separate class that handles displaying and lerping those types which could be then used throughout the game, rather than level specific sets of methods?
- Buuuut my head keeps bouncing off this
- I've been looking into various Reflection abilities in C# that seem to make some of this possible in terms of pulling out getters and setters for specific properties... I understand a few of these things now.
- I really didn't expect to be thinking about Reflection, but here we are
- So would it be like
[ ["color", "LerpColor", [0.0f, 100f]], ["intensity", "LerpFloat", [0.0f, 10f]], ["rotation", "LerpAngle", [0, 360]], ["shadows", "LerpEnum", [Hard, Soft, None]], ["shadowNormal", "LerpFloat", [0.0f, 2.0f]] ]
- No not really because you can't have arrays like that in C#, that would be some JS shit
- Could be just arrays of strings?
- Maybe that could work? Something to try.
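One way to get the spirit of that nested array into C# would be a small descriptor class holding the property name and its range, applied via System.Reflection. A minimal sketch, where DemoLight is a stand-in for UnityEngine.Light and the property names/ranges are my own assumptions:

```csharp
using System;
using System.Reflection;

// DemoLight stands in for UnityEngine.Light (assumed property names).
public class DemoLight
{
    public float Intensity { get; set; }
    public float ShadowStrength { get; set; }
}

// Plays the role of one entry in the JS-style array: name plus range.
public class FloatParameterSpec
{
    public string PropertyName;
    public float Min, Max;

    public FloatParameterSpec(string name, float min, float max)
    { PropertyName = name; Min = min; Max = max; }

    // Set the named property to a random value in [Min, Max] via Reflection.
    public float ApplyRandom(object target, Random rng)
    {
        float value = Min + (float)rng.NextDouble() * (Max - Min);
        PropertyInfo prop = target.GetType().GetProperty(PropertyName);
        prop.SetValue(target, value);
        return value;
    }
}
```

An array of these specs gives the random-selection pool, and analogous ColorParameterSpec/EnumParameterSpec classes could carry their own lerp logic per type.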
2018-01-31 17:24, in which I reflect on the state of the code, CART 415's reaction, and next steps
So I did get the reflection stuff working to a point where I have this capability with Directional Light
- The code can randomly choose any float based parameter, select a random target, and then lerp that parameter to the target, all based on a relatively simple data structure representing the parameter name, and the min and max values, along with the idea of a specific 'selection' for the manipulation.
This means that currently the lerping is happening on a parameter by parameter basis rather than all at once, which is what I assumed I wanted because it felt truer to the human experience of changing parameters and observing their effects.
However, when I showed this version (as well as other versions involving jumping between parameter sets or lerping between all parameters at once) to the class, it was pretty clear that the one-by-one lerping was a touch underwhelming, however sophisticated its lerping code. Some of this may simply involve the fact that colour and transform aren't being manipulated in this situation, however? Those are two of the most dramatic parameters.
There were a bunch of suggestions from the students, including the ideas of
- Player-controlled parameters (e.g. sliders that the player can manipulate)
- Representing the parameters with sliders but having all the sliders present at once for a kind of visual confusion that backgrounds the actual game (and to some extent even the actual effect of the sliders)
- Having a combination in which the player can adjust the sliders in competition with the random agent
- Having the sliders/parameters be adjusted indirectly through the motion of human (and AI?) tanks in the scene
- Having pre-determined parameter sets to lerp toward rather than having it be totally arbitrary all the time
These are all ways to make the experience more dynamic in terms of the relationship between parameter and effect, which does seem necessary.
However (again), I also want to remind myself and stay true to the kind of formality I see as underlying the work, which is a kind of purist manipulation of parameters in the game such that the (sort of) arbitrariness of the parameterisation makes sense (and also perhaps the ways in which the available ranges of parameters are kind of strange (like who ever sets a light's intensity to 100f?)). On that front I'm reluctant to have player control of the parameters - it strikes me as being too close to the actual act of making a game and feels distracting from the idea of parameters as arbitrary (it would be too tempting to construct meaningful parameter sets?). And I'm similarly reluctant to make a designerly choice ahead of time of pre-defined parameter sets that are "interesting" - again, that seems to move away from the arbitrariness argument... the kind of "neutral" stance that random numbers can take toward the parameters.
I do wonder, however, why I feel it's necessary to move away from lerping all parameters simultaneously. There's no clear reason for this other than a separation of distinct parameters so the player/user can understand such parameters exist? Does a computer agent really need to adjust one parameter at a time? And, again, the visual effect here might be underwhelming in a way that means that multiple parameters is "better"?
But one of the reasons I think I was drawn to the one-at-a-time approach is the relative simplicity of the UI add-on for showing what's happening. Obviously you can see the actual impact, but I also have this idea that you would see a second-order representation of what is changing - such as text listing the parameter and object and its changing value (or a slider, say, representing that). That's fine for one parameter at a time, but would clearly get out of hand if you do all at once (and you're talking about 10-50 parameters (particle systems, for example, have a truly insane number of parameters available to them!)).
One option/compromise would be to display them all but to have that take place in a separate pane of the interface, so that it's not overlaid on the game itself? The problem is it starts to pull me into user-interface design issues I don't particularly want to be involved in?
A "simpler" solution could be to adjust the parameters much faster... like one second per lerp, say, so that you have drastic effects in small amounts of time, but don't change more than one thing at once? Could even make an argument for randomly adjusting all the parameters exactly one before cycling back to re-adjust any of them? That way you're guaranteed to see the more "interesting" (obvious) ones in a short space of time? Or even 0.5seconds? Etc.
I think I need to try that one-by-one mode first before I start committing to any idea of adjusting multiple parameters individually, just to dodge the potential UI nightmare. If I need to face the UI nightmare I will.
Next task, then: add remaining lerp functions and associated data into the existing directional light and play with one-by-one timings/configurations to see if there's something stimulating.
2018-02-01 11:55, in which I write about progress in lerping code and thoughts on artificially adding objects (or not)
I got my lerping working for
- rotations (Quaternion)
I feel quite proud of myself in terms of the genericity of the code and conquering things like the enums. Also set it up so it can select an arbitrary instance in the scene of a specific type and then work on that, so the door is open toward cases where you want to tweak parameters on any of a number of instances (e.g. 3D objects will be big for this).
Now with colour and rotation
Now that I've added colour and rotation to the lerps for the directional light (and also, actually, the shadow mode and render mode), it feels much more dramatic to look at. I have the interval really tiny, admittedly, but it doesn't feel dull. The text-only display is pretty bland so I still need to think about how to display that.
It seems more okay to do one parameter at a time rather than multiple. Will continue to monitor that though.
A big question I'm still on is whether or not I should be adding entities to the basic game in order that they be there to parameterize. That is, there are no trees in the game, so do I add a tree? There are no wind zones in the game, so do I add a wind zone? And on and on. There's no video of any kind... and so on.
My default assumption had been yes for this. Because I wanted to give that experience of the parameterisation of all possible entities. But I'm less certain about it now just because it feels a bit inelegant to just drop random things into the scene like that, quite disruptive and strange.
On the other hand, disruptive and strange is not the worst thing. I guess the thing to think about is the strategy for how this happens. If, as I've been leaning, it's each super category that has its own scene in the game with parameterisation, then I'd have to add a spotlight to the light scene, for instance. And in the 3D object I'd have to add, presumably, a sphere, a cube, a cylinder, and so on.
At which point you start to ask: well, when do you add them? Right at the beginning? Is there a random chance of adding one of these entities as opposed to changing the parameters of something already present? How do you set the likelihood of adding a new thing versus adding something else? And what about weird shit like whether or not something should have a rigidbody? (I guess here we need to just instantiate things with whatever their default shit is?)
So, I can see how a version of 3D object (it stands out to me) could cycle around between
- Choosing an existing 3D object (a mesh, say) and altering parameters for a while, then switching - or even just one parameter
- Spawning a new 3D object (a sphere, say) and doing parameters on that
What the fuck would this look like? Should each instance have its own busy little parameter changing script? Perhaps it should actually.
So I suppose the thing to try next is:
- Create a point light script and attach to the prefab of the shell (which has a pointlight)
- Create an ambient light script
- Create a super script that occasionally decides whether to add one of the light types (or should it only ever add a spotlight? or??? Seems it has to be arbitrary, in which case you just might not get a spotlight for a while?)
- Or do you just add every type right at the start that isn't already present and then go from there? Interesting question.
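For what it's worth, the core primitive underneath all of these scripts is the same: pick a random target for one parameter, ease toward it over an interval, repeat. A minimal engine-agnostic sketch in Python (all the names here are my own invention; in Unity this logic would sit in a MonoBehaviour's Update using Time.deltaTime and Mathf.Lerp):

```python
import random

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

class ParameterLerper:
    """Repeatedly picks a random target for one float parameter and
    eases toward it over a fixed interval. All names invented."""

    def __init__(self, value, lo, hi, interval):
        self.value = value          # current parameter value
        self.lo, self.hi = lo, hi   # range to pick targets from
        self.interval = interval    # seconds per lerp
        self.elapsed = 0.0
        self.start = value
        self.target = random.uniform(lo, hi)

    def update(self, dt):
        """Call once per frame with the frame's delta time."""
        self.elapsed += dt
        t = min(self.elapsed / self.interval, 1.0)
        self.value = lerp(self.start, self.target, t)
        if t >= 1.0:  # interval finished: settle and pick a new target
            self.start = self.value
            self.target = random.uniform(self.lo, self.hi)
            self.elapsed = 0.0
        return self.value
```

Each of the scripts in the list above would then basically be an instance of this pointed at a different parameter (intensity, range, colour channels, etc.).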
2018-02-01 15:51, in which I'm a little further on with this so will close the day with a few more thoughts about the "personality" beneath the parameters
I managed to implement lerping for spotlight and point light. It works. With provisos:
- Obviously there's not a spotlight in the scene in the original game so for today's purposes I added a "default" spotlight. I ended up "needing" to cheat to make it visible by placing it higher above the ground than it defaulted to. Feels like a cheat. With the lerp interval small enough, it was definitely wild and swinging around in a way that made it apparent. Not really clear it fits well into the idea of the parameterisation of light in Tanks though.
- There's a primacy question here: is this more about Unity or is it more about Tanks! the game made in Unity? The artificiality of putting in the spotlight felt wrong to me. I'm inclined right now toward only fiddling the parameters of really existing elements in the original game, even if that means sacrificing being able to play with certain elements.
- Point lights work but because they're short-lived (the duration of a shell's life) it's possible for them to not really undergo any significant change. I thought it would be better to have running properties applied to each shell, but that's a real hack of the concepts of the game, so perhaps not. I "got around" this by having an ultra-short interval of lerping on the point light, so it's more likely to change radically in a short time, but it's not clear to me how that relates to the time taken on the directional for example.
Overall there's a hanging question about the "personality" of the changes being made by the random function.
- Why would it be one speed rather than another?
- Why would it choose one parameter rather than another?
- Why would it lerp one parameter at a time as opposed to multiple?
- Why would it choose to omit or not omit parameters that exist in code but that have essentially no visual/discernible outcome in play?
Many of these questions veer into the question of whether this is "for show" and I'm trying to make visible parameters maximally visible. Versus a more "internal truth" of the parameters in which I'm changing the parameters (for real) and come what may in terms of the actual visual (auditory, kinesthetic, ...) outcomes of that.
This requires serious consideration. This is manifesto territory I suppose.
Is the "completely insane" model of change, where it looks whacky and clearly unpleasant/disorienting the "best" way of drawing attention to the artificiality of these parameters? Or is it secretly overly sensationalist and thus actually distracting from the purity of the idea?
More news at 6.
2018-02-07 16:21, in which I'm still thinking about the player experience of the ontology via lerping
- The completely insane version where you lerp every parameter at once is very dramatic in terms of showing you what it's doing. But it becomes impossible to also expose the actual parameter in a human readable form, because it's all of them.
- The one-by-one approach works, but there are various "uninteresting" parameters that don't impact on the scene, so that you end up not really seeing anything actually happen. It's more "honest" and allows you to actually create some kind of UI indication of the parameter changing to highlight that it's happening. (Notably shadow bias and shadow normals and so on are totally uneventful. But maybe that's also important/okay? I'm trying to be honest about the parameters, right?)
- Occurs to me that one way to avoid a key problem would be to lerp parameters away from their default values and then back again? So shadows would be turned off in the sequence, but turned back on at the end. The light would lerp to a weird colour, then back to the default. The angle would lerp away and then back, and so on? That would have more of a "look, I'm showing you what this parameter does in this scene" effect.
- But it would also lack a kind of cumulative madness that would satisfy me.
- But yes, there's an appeal to the computer kind of studiously demonstrating the different effects of the parameters. "Look, I'm changing the colour now. Now I'm changing the angle of the light. Now I'm turning on and off the shadows a few times. Etc. etc."
- That seems quite 'pure' while also giving some kind of guarantee you don't let the randomness leave you in states where a bunch of other changes become totally non-visible (e.g. you rotate the light such that it's night with no shadows, then proceed to do a bunch of shit with shadows)
- Will it feel too dry if there's no cumulative weirdness with parameters?
- Wish I could have a computer voice announcing the different changes as they happened.
- How would this fit when you're lerping the point lights of shells? It's so rapid as to be basically invisible during play. Maybe for such impermanent elements you constantly lerp parameters for an invisible version and apply those specific parameters to the extant ones each frame? So there's a 'platonic' version (basically the prefab) which you manipulate. Could be some hideous performance hit I'm not thinking about properly, but maybe it's okay.
So, to do:
- Create a version that lerps away and back
- Create a version that has a 'platonic point light' being consistently lerped in the background and work out how to apply its parameters to instantiated pointlights on shells
- Contemplate how you would display these things as UI
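The 'lerp away and back' idea from the to-do list is easy to pin down precisely. A sketch of the shape of it, assuming the excursion value has been chosen randomly elsewhere (Python, invented names):

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def away_and_back(default, excursion, t):
    """Parameter value at normalised time t in [0, 1]: out to `excursion`
    over the first half, back to `default` over the second, so the
    parameter always ends where it began."""
    if t < 0.5:
        return lerp(default, excursion, t / 0.5)
    return lerp(excursion, default, (t - 0.5) / 0.5)
```

The "studious demonstration" feel comes from the guarantee that every excursion resolves back to the status quo, which is exactly what the cumulative-madness version would sacrifice.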
2018-02-08 12:30, in which I took one step forward and two back?
I managed to get the idea of lerping away and then back to a default value working out alright. Not especially difficult. Definitely made the feeling of the changes more consistent and understandable as changes to the status quo, because the status quo doesn't sort of vanish immediately with the first changes into madness. So it definitely felt more like adjustments to this game rather than completely random things.
So that felt good but then
Totally couldn't work out how to make a version where there's a constant lerping of ALL point lights and then having that apply to point lights as they're generated (on shells). Also occurred to me a little later that that would set a shitty precedent too given that it would imply, for instance, that I would do the same to manipulations of all 3D objects... e.g. have an Ur-object. Rather than selecting them one at a time or having each one lerp on its own timetable. There are more problems with that of course, notably any ability to show such a thing in the UI - if you're lerping all 30+ objects at the same time you can't show that on the screen. But if you lerp them one at a time you can. But then if you lerp one point light at a time, how do you pick it? There are problems.
Also just couldn't work out how to make a sufficiently generic lerper to handle both Light and AudioSource. It's possible there's a way to do this, but I'm kind of at the edge of my understanding of it all at this point. I do want to avoid repeated code, but it's starting to feel like it'd be better to just write custom classes that lerp every specific kind of thing rather than dealing with all this fucking reflection. Maybe there can be some static elements that at least maintain an appearance of consistency among them? Ugh.
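For the record, part of why the generic-lerper idea feels so painful is that C# reflection (System.Reflection, PropertyInfo.GetValue/SetValue plus type checks) is much heavier than the equivalent in a duck-typed language. The shape of the idea, sketched in Python with stand-in classes (not real Unity components):

```python
class Light:
    """Stand-in for a Unity Light component, not the real thing."""
    def __init__(self):
        self.intensity = 1.0
        self.range = 10.0

class AudioSource:
    """Stand-in for a Unity AudioSource component."""
    def __init__(self):
        self.pitch = 1.0
        self.volume = 1.0

def lerp_attribute(component, name, target, t):
    """Generic lerp via 'reflection': read a named float off any
    component, interpolate toward target, and write it back."""
    current = getattr(component, name)
    setattr(component, name, current + (target - current) * t)

light, audio = Light(), AudioSource()
lerp_attribute(light, "intensity", 0.0, 0.5)  # one code path
lerp_attribute(audio, "pitch", 3.0, 0.5)      # for both component types
```

In C# each getattr/setattr line becomes a reflection call with boxing and type dispatch, which is roughly where the mess comes from.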
Anyway, I feel like I'm getting my ass handed to me. On the plus side the lerping of the audio, notably the pitch, was pretty fun and successful, so that's not the end of the world at least? But jesus fucking christ this is rough and difficult stuff. Ouch. Owch. ooouuuuuuccchchhchchchchchchchfuckckckckckckkcckck.
Okay, well, keep trying? I guess so. Jesus. Learning anyway. I'm learning. Don't like it, but I'm learning.
2018-02-16 14:22, in which I attempt to lay down the law about a specific plan for just making this fucking thing
So guess what, it's been a week since my last confession. I have a new plan. It may or may not make sense but this is it:
- Screw any kind of generics and trying to make beautiful consistent reusable code.
- Make specific lerpers for each kind of thing I want to lerp (directional light, ambient light, point light, audio source, 3D object, particle system...)
- Work out how to draw GL lines on screen to act as selection markers for specific objects on the screen (including invisible objects like lights)
- Lerp a single selection and parameter at a time, fairly rapidly, with selection and parameter indicated on screen
- Make them cumulative so that there's a sense of entropy involved
- That's it
That is the simplest version I can think of of this whole parameter-oriented thing. It should be pretty clear what it's about: selecting elements of the world and altering them. It should communicate that it's about the underlying reality (and fragility) of the game. It should impact gameplay relatively swiftly. It should be possible to bulldoze through a lot of this with specialised code rather than fretting about generalising and reflection.
The big questions I can see that may cause trouble:
- Scheduling which object and which parameter
- There needs to be a manager taking care of selection and lerping (perhaps the simplest thing is to have an array of every single lerpable object in the scene and just iterate through that?)
- The question of fleeting objects like shells (and associated model, lighting, particle effects, and SFX) < they barely exist long enough to adjust even one parameter, which may or may not make any difference at all, such that they will tend to appear unaffected or simply be selected disproportionately rarely
- A 'solution' to this was the idea of changing a 'platonic' version of these things (an ur-shell that is part of the changes and which is used to instantiate new shells instead of the prefab) - but it's a little unclear why this would be the approach - each instance in the scene is a game object, including the fleeting ones. If I went with the prefab idea it would be: a) fake underneath (I wouldn't be altering the prefab) and b) would mean this should apply to the tanks and various buildings as well
- An argument for that is the idea that these things are created from a single source and that changing all instances of a prefab at once is a kind of emphasis on the idea of artificiality too? So all the tanks change simultaneously for instance. But it wouldn't have the kind of strange asymmetry that seems intuitively appealing...
- Alongside this is the frank and honest acknowledgment that I've had no luck actually implementing this idea of changing a 'prefab' underneath in the first place (though I can see how I'd do it if I simply changed the instantiation code in the actual game itself - is that cheating or simply acceptable as a way of indicating what's happening?)
- The prefab thing feels like it doesn't make sense, I'm sorry.
- Well unless it's that any change that happens to be applied to an instantiated prefab... no that doesn't make sense
- And if so I'd be sacrificing: basically everything to do with shells and explosions entirely. Which would lose a lot of potentially magnificent particle effect weirdness? But the alternative is changing all tanks and all cargo places at the same time... which is also HARD to indicate with the GL lines idea of selection... you're not selecting an instance (or not always) but rather a platonic ideal of one, which doesn't exist...
- Unless in selecting an object that comes from a prefab you show selecting that one, but affect all prefabs in the scene? OR we go with the idea that only prefabs that were instantiated later would be affected... so tanks WOULDN'T change...? BUT they would... and it's weird because there are multiple tanks, but a shell exists in its own moment...
ANSWER: Start by going only with instances, suffer the fact that it means shells and explosions are immune to change (more or less) and see what that's like. If it then seems like a requirement to explore 'prefab based' parameters, then go there at that point.
- A manager selects an instance of a game object (light, camera, 3D object, particle system, canvas)
- This will involve thought about drilling down to child objects too, since they need to be selectable, but maybe you start at the top level and then possibly select a child?
- It tells that object to lerp a parameter
- Ideally, then, you would dynamically select from the current set of instances in the scene, rather than maintaining an array since certain objects are added and subtracted over time
- You need to handle the case where the object you're working with is destroyed (shells, tanks)
- Need to indicate selection with GL
- Need custom lerpers attached to every object in the scene and anything instantiable
Question: If you select a 3D object, what are the lerpable elements? Transform of course. Mesh renderer? Rigidbody?
Thing: The things you actually lerp are parameters on Components, so is the real thing about what Components these things represent? Light, Camera, Mesh Renderer, Transform, Rigidbody, Audio Source, Particle System, Text, Slider, ... is it plausible to just grab all those things? How expensive is that to do once per lerp?
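The manager's selection step, as specced above, could be sketched like this (the component and parameter names are real Unity ones, but the registry and function are invented for illustration):

```python
import random

# Illustrative registry: component type -> lerpable float parameters.
LERPABLE = {
    "Light": ["intensity", "range", "spotAngle"],
    "Camera": ["fieldOfView", "nearClipPlane"],
    "AudioSource": ["pitch", "volume"],
    "Transform": ["positionX", "positionY", "positionZ"],
}

def choose_target(scene_components):
    """scene_components: list of (type_name, instance_id) pairs gathered
    fresh each lerp, so destroyed objects (shells, dead tanks) never
    get selected."""
    type_name, instance_id = random.choice(scene_components)
    parameter = random.choice(LERPABLE[type_name])
    return instance_id, type_name, parameter
```

Gathering the list fresh each lerp answers the dynamic-selection and destroyed-object points in one move, at the cost of a scene scan per lerp.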
2018-02-17 17:17, in which I report in on solid progress according to the new mission
Well my plan to just make the fucking thing is going well so far. Seems to be inexpensive (enough) to capture all components once per lerp. I've implemented three of them (Light, Camera, AudioSource) without too much difficulty, still using the reflection-based code from before but now
- Lerping is cumulative (doesn't bounce back)
- One thing at a time
- Chooses based on component type for now (equal chance of selecting any type and then chooses a random object with that component on it)
Have the bounding box drawing properly - oddly satisfying when it lerps the camera to perspective and you can see its bounding box (normally you can't see it of course).
Definitely running into the obvious issue that certain lerpable values are kind of a disaster in terms of rendering the game utterly unplayable - e.g. if the near clip plane goes too high you just can't see anything. One option is just to accept that, but this question of acceptable ranges is just a thing to think about really.
One option I thought about just before is that rather than lerping to a specific value one could just lerp for a specific duration, or by a specific amount. Still cumulative, so you could end up lerping "too far" and ruining the game, but at least it would be "fair" and not using artificial ranges... it would be taking things from their base state and altering them over time. So there would be change, still, but not necessarily an instant lerp to an unplayable version of the game. Would still have to define (somehow) an amount that you can lerp (negative or positive), but that's fine... could define a min and max amount and then choose a value in there (e.g. -3 to +3), and then target a lerp with that value added.
I'm pretty convinced by that for now, that it will at least avoid sudden leaps into ridiculousness.
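Concretely, the additive idea might look like this: the next target is the current value plus a bounded random delta, optionally clamped for parameters that can't go negative or must stay in a range. (A Python sketch; the numbers are placeholders, not tuned values.)

```python
import random

def pick_additive_target(current, max_step=3.0, lo=None, hi=None):
    """Next lerp target = current value plus a bounded random delta,
    optionally clamped, so change accumulates in steps rather than
    jumping anywhere in a large absolute range."""
    target = current + random.uniform(-max_step, max_step)
    if lo is not None:
        target = max(lo, target)
    if hi is not None:
        target = min(hi, target)
    return target
```

Any value is still reachable eventually (the deltas accumulate), but never in one sudden leap, which is the "fairness" being aimed at here.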
Transform is a big one to add in soon, but so far so smooth actually. (Need to remember to remove the transform change from the directional light, speaking of.)
Fuck knows what this leads to, but it definitely feels like it's on more solid ground and will be illustrative of... something. And because it's one by one it feels fairly possible to actually have UI listing the current change taking place which is good too.
(The other way to select: grab all game objects, choose one, and then choose one of its lerpable components - is also good, but seems like it will enormously favour the '3d object' lerping around transforms and maybe mesh renderers? They're the most common components that we'll be affecting... will have to see how it feels.)
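That bias is easy to quantify with a toy model: with object-first selection, Transform (which is on every object) dominates, whereas type-first selection treats every component type equally no matter how rare it is. (Toy scene and invented counts below, nothing like the real Tanks! hierarchy.)

```python
import random
from collections import Counter

# Toy scene: every object has a Transform; rarer components are scarce.
scene = (
    [["Transform", "MeshRenderer"]] * 30  # meshes
    + [["Transform", "Light"]] * 3        # lights
    + [["Transform", "Camera"]]           # one camera
)

def object_first():
    """Pick a random object, then one of its components."""
    return random.choice(random.choice(scene))

def type_first():
    """Pick a component type uniformly, regardless of how rare it is."""
    types = sorted({c for obj in scene for c in obj})
    return random.choice(types)

random.seed(1)
counts = Counter(object_first() for _ in range(10000))
# Transform lands roughly half the time under object-first selection;
# Camera almost never.
```

So object-first selection really would enormously favour transform-style changes, as suspected.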
So, current mood = optimism.
At some point I need to write something about the nature of seeing my CART 415 students' inspiring work around more expressive prototypes and how that's kind of sucking my will to tackle that angle on this larger project. Interesting experience.
2018-02-18 11:22, in which I reflect on adding Transform and progress in general
I added a Transform lerper today. Two things come out of that
It looks amazing when it happens. Although I didn't see it as often as I'd have liked, the couple of instances where some mesh got selected and transformed relative to its parent looked pretty amazing - like a piece of the tank suddenly levitating into the sky, or a tank rotating so it's face down in the dirt.
It doesn't come up as much as I'd have thought OR it gets quickly obscured. There's a kind of power hierarchy of effects. Some things affect everything on the screen, like the camera, whereas some might be very subtle (like a single rotation of a single mesh). It means that if the viewport resizes, for instance, all bets are off. The drastic impacts are pretty satisfying, but I suspect they're too impactful too quickly, and don't leave enough time to sit with the accumulating effects on the game.
As such I imagine that, as above, it seems like the right thing to do is have a restricted cumulative effect for each parameter - a particular distance it's allowed to move in one lerp - and then I should be able to tweak that so it's less completely insane so quickly.
I was thinking last night it might be possible to make a 'rigorous'ish approach to this if I determine it in terms of percentages of some drastic effect? Like the viewport can change by 10% each time, say? A position can change by up to one unit?
Should I restrict position/scale/rotation to only affect a single axis at a time for something more measured? Probably not necessary? But certainly the amount needs to be restricted.
Remaining elements to target are:
- UI elements (that's just text, canvas, and slider in this game I believe)
- Particle Systems (this seems fucking huge)
- Mesh filters (fun because you can change the actual mesh)
- Mesh renderers (includes materials)
- Materials? (Can you edit them live? Should you? Do they fuck the underlying game?)
- Box colliders (boring because invisible, but still okay?)
So that's a few more to go yet, but they seem kind of doable.
The other big thing is switching to a system of specifically smaller-scale cumulative effects, rather than a random value in a (large) range that you specifically target. Rather: what it is now plus a smaller random amount. Think that will play much nicer.
2018-02-19 12:50, in which I report satisfaction with progress and don't say anything very interesting about it
Added the MeshFilter which was satisfyingly weird. Things popping into different forms of existence while you're watching. Also some tweaks toward the more cumulative model I've been thinking about using... it's not perfect, but started to feel more like controlled chaos?
One observation: things can get really fucked fast which is kind of entertaining. Like if the ground plane tilts, you pretty much lose most of the level. There are a lot of things that just instantly destroy the gameplay, which I think is fine. Should always be possible to restart.
Makes me wonder whether it should be possible to choose which of the types of things you're going to accept changes to? But there's a kind of magic to all at once? I don't like the idea of the player having agency?
It's pretty intense though. I guess some of that will just be fixed by tweaking the number? Makes me wonder if I should be making some kind of UI for adjusting those numbers actually? Might be nice to be able to set them up in the editor rather than here? I dunno.
Anyway, progress being made. It's definitely insane in a good way. It'll be worth releasing. It totally indicates the arbitrariness and precariousness of a game...
2018-02-19 15:12, in which I check in with more progress.
Since the last entry I've added Rigidbody and MeshRenderer to the lerpers. They both work well. The constraints stuff that Rigidbody does is super weird, as was strange gravity stuff. At one point I was driving my tank in space, which was pleasant, and sometimes they suddenly refuse to move in a specific axis. The MeshRenderer one is nice because things suddenly change their appearance in a satisfying way.
- UI (Canvas, Text, Slider)
- Particle System
- Box Collider
With a vague question about actually editing the underlying properties of materials as opposed to just setting them? But unclear that's really necessary? I think we get plenty of impact without that and you do have to stop somewhere? Also would require a weird kind of sublerping since a material is a single property of a component... could argue that I'm only about setting those properties, not about changing the underlying objects inside them?
Anyway, three to go and then we'll have something that's kind of real?
I suspect I'll use separate lerpers for Canvas, Text, Slider. And then there's the Rect Transform for UI elements which I should probably look at?
ParticleSystem is made up of a series of modules. Following the logic of material above, I suppose I generate a randomized particle system as a destination? And replace it with a flip rather than a lerp? Each one is then randomized. This is weird, too, because it's only going to apply so very briefly (it'll basically never happen for explosions, firing). Only real recipient of this will be the dust trails?
Do I have any concern about the idea that some of these things are lerping but rather flicking from one state to another? You can't lerp materials of course, so it doesn't make sense, but a particle system does have underlying parameters controlled with floats, so should they be lerping? But if so... holy fuck that'd be time consuming? Maybe possible to do it with lerpers for every module? But again... jesus that's a lot of work...
2018-02-20 23:29, in which I just note that I'm close to feature complete and more or less happy
Okay I've implemented a few more. The UI ones are almost all done. Leaving:
- Box collider
- Particle System
Canvas won't take much, nor box collider. Particle System seems like it miiiight be an epic nightmare? But perhaps it won't?
Have some concerns about the aesthetics of selection... particularly the way the camera zooms in on the tanks meaning you don't see the broader level and thus don't see the changes being wrought so immediately. But I can't really help that I suppose. It's funny, that's an observation that came up in class - the deemphasis of the nature of the world by the camera and initial positioning. Perhaps I should position them further apart for that reason? But that seems a lot like cheating the underlying system.
Anyway it's close to being feature complete. At which point I can think about how random is random and specific ranges for different values (it's pretty arbitrary right now).
I do get the feeling it works quite well as a system for revealing 'behind the curtain' and making the actual materials highly visible. In ways that often break the game itself, but that in itself is a useful thing to be able to see.
2018-02-23 13:52, in which I think about the undead authorial presence in the game
Now put in Colliders (Capsule and Box) and tried but decided to forego Canvas. Therefore the last remaining lerper is Particle System, which is a big one in terms of sheer volume of numbers per module, so we'll have to see how this one goes, but it shoooouuuuuuld still be possible? Just to create a randomly configured module and let it rip??
On the undead authorial presence
One thing I'm led to think about as I work through all these millions of lerpers is the question of my authorial intent versus pure chance in terms of how they're handled. I can see myself making decisions (even if they're temporary?) in a number of ways
- At an extreme level, choosing whether to even include a component. In this most recent session I decided not to include the canvas as a component because it is almost guaranteed to just make the text UI invisible and that doesn't seem like an interesting thing to experience? But I'm then imposing an idea of "interesting".
- Lower down I'm not always implementing every single possible parameter on a component (though I'm pretty close to implementing every single parameter available in the editor to be fair), and the rationale for omitting a parameter is generally on the basis of it making no apparent difference to the player's experience. e.g. turning on and off HDR or something doesn't actually create a perceivable difference, even if it really is a setting.
- And along with this I am obviously dictating ranges that values should move within for many of the parameters. That restricts specific things from happening, however. I'm trying to rectify this with a move toward lerping values based on an AMOUNT of change (in positive or negative directions), but that creates certain other difficulties around values that maybe can't be negative, for instance, or need to exist within a specific range. Maybe I could add an optional specification of range I suppose and clamp things?
Anyway, just like John Cage I suppose I'm not really able to extract my own decision making from this thing, no matter how many Random.value expressions I write into it. It still reflects tweaks by me to make it more 'interesting' to experience.
Even when I play it now I worry about things like the fact the camera behaviour naturally obscures a lot of the change if the tanks pursue each other. There's nothing I can do about that except hope a player notices "something is up" and eventually tries exploring the landscape a bit rather than just attacking each other. I think that's a reasonable hope though, and I can't really change anything on that front without really betraying things.
It's come a long way though and does feel like a genuinely interesting awareness of the materials. Not necessarily conversational as a designer, but potentially allows the player to have the conversation? Or at least see what their world is made of and how potentially fragile gameplay is in relation to engine settings...
So I'm still happy. This afternoon I'll tackle at least one Particle System Module and get a feeling for that.
2018-02-23 15:56, in which I reached a 'feature complete' version of the main game
Having done battle with a couple of ParticleSystem modules this afternoon I feel pretty confident decreeing that I'm just not going to tangle with them. They are very deep and complicated systems that it just doesn't make a lot of sense to fuck around with in terms of time and results. One could do so, but they involve so many specialised systems that it's just nightmarish... like a whole extra project just to do ParticleSystem randomization. So, no thanks for this one. Kind of sad obviously since it's a very clear limitation of my original idea of tackling the full range of the game's representation of Unity ontology, but sanity is important too.
With that decision made (and obviously I may go back on it, but I don't think so - I need to be wise about how I use my time), the game now has code to change every kind of thing in Tanks!, which is pretty neat.
When I play the game it's definitely strange and messed up. Particularly enjoy moments where the tanks break free of gravity and can drive into the sky, for instance.
It occurs to me, even, that this is almost a game design generation system. It creeps around the parameter space of this pre-existing (generic) game and finds versions of it based on new parameters to the engine. It effectively "invented" the idea of "prop hunt", for example, on its own, when it swaps player meshes. It's obviously incredibly clumsy and blind, but actually it's pretty intriguing from that perspective - not just about madness and making a game/engine seamful, but also about finding things in the positive sense. Liberating us from standard interpretation of the parameter space.
What now, kid?
Well I want to show it to my class for feedback. I think it fits pretty well into the 3D objects class - I can show the real thing, but also show a version that only parameterises elements directly tied to 3D objects to present it as an exploration of that particular subspace. That will help me make any "last" decisions.
It's clearly called Tanks!? at this point, which I'm pretty happy with. It conveys the WTFness of the game, also the idea of ? as a wildcard operator is good too.
There's a question of menu system/intro screen, but I kind of want to just minimise all of that - it's not in the original game for one thing. Maybe just a pause right at the start before the game starts where the title bit can say TANKS!? first, and then the game starts would be appealing. There's the question of telling the player the controls. I guess that can be another text on the title screen, maybe it can even wait for the player to hit fire to start, but the lerper will already be underway which will be pretty fun. That seems fair and not too much of a stretch of the original game.
And there's the biggest question of revisiting the actual input I've had into how the system selects parameters. I think the most obvious thing is to pursue the 'additive' version of the thing, and to look out for special cases where that could break something. LerpIntNonAdditive or something maybe to account for that? Yeah that seems fair. But the additive model seems better because it allows any value to be put in eventually, just at a specific interval. How I select those intervals is still going to be a design decision, but it feels less restrictive of the actual possibility space than locking the game down to specific min/maxes over its lifetime (unless those exist in the engine itself).
So that's the next choice. I can think about that over the weekend. After that the game is, suddenly, kind of done. Fun!
2018-02-24 09:20, in which I think about 'final' steps
I mean, I know 'final' is just hugely ironic when you start talking about it with a game, but nonetheless it does feel like the game is suddenly really close to being done?
I've now got the basic 'polish' of having an application icon, a title screen with instructions, and thus a basic introduction into the gameplay rather than just falling in. The instructions don't look very nice, so that's something to maybe tweak a little - awkward because they're not even in the original game.
Other than some little aesthetic tweaks like that, the big thing today is probably to try to make the various parameters cumulative rather than range-based. Float already is, and Vector3 already is. Some don't need it (like Color).
Along with setting up the actual capacity, I then need to determine what 'reasonable' ranges of motion are per lerp. That's effectively impossible, but perhaps one could have a guiding principle (which I think has been secretly guiding me anyway):
Change should occur ideally such that it emphasises to the player the artificiality of the particular settings of any one moment. Thus change should, where possible, be visible and in motion.
That is, the point of this game is (or has become) to illustrate the 'inner nature' of Unity's GameObjects and their default components by changing them in real time during an actual game. The interactions between those changes and the experience of play throw the artificial nature of the world into relief. It's most effective if you see/hear the change happening, and ideally perceive it as a change from one thing to another related thing. So you'd want to avoid
- Things just popping out of existence (it's dramatic, but doesn't really emphasise the idea of change over time)
- Things making the entire game illegible, like a camera size that means you can't actually see anything (or at least you wouldn't want this to happen immediately)
As I said, this has kind of been guiding my work on the game anyway because, intuitively, I've tried to maintain approaches to the parameters that make them apparent to the player, rather than just random acts of sudden, complete change and destruction. I'm fine with the entropic nature of the procedures - it's okay if the game becomes unplayable or unreadable, so long as it does so in a more or less intelligible way.
There are still things that break this rule that I can't help - notably invisible things like colliders which affect the game, but can't be perceived. C'est la vie. And there is a glaring omission in ParticleSystems which I'll just have to live with.
Overall, though, a cumulative approach, where each 'step' of change is within limits that seek to make sure the change is at a scale the player can perceive (again, where applicable), feels like the strongest approach.
And so I shall proceed. I'll get what I feel is a good version of that done today and then be able to show it to my class for feedback on Monday.
2018-02-24 12:56, in which I have a release candidate and a few niggles here and there
I've implemented all the additive versions of everything, including introducing a range to clamp to where it makes sense.
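The 'additive step plus clamp where it makes sense' pattern can be sketched as a single tick function. Again, this is an illustrative Python sketch of the logic rather than the actual Unity C# implementation; the name and signature are hypothetical.

```python
def clamped_additive_lerp(current, target, t, lo=None, hi=None):
    """One lerp 'tick': move current toward target by fraction t,
    then clamp into [lo, hi] only where a hard limit makes sense
    (e.g. a parameter that must stay non-negative)."""
    value = current + (target - current) * t
    if lo is not None:
        value = max(lo, value)
    if hi is not None:
        value = min(hi, value)
    return value
```

Leaving `lo` and `hi` optional matches the design: most parameters wander freely and accumulate change, while a few get a clamp bolted on as a special case.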
I ran a high-speed version of the game with 0.1-0.25 second max timing and it was madness in a very pleasant and aesthetic way. Game remained pretty playable, with the 'usual' floating tanks, misaligned UI and so on. A mess, but a nice mess.
A question remains concerning what it's like at slower rates of change. They're needed to make things like changing transforms fun to watch rather than teleportations/snapping. But then they 'waste time' on invisible parameters.
I guess a big question is whether I go back through and prune parameters that don't really change anything, now that I've kind of done that implicitly for some stuff anyway. Like, the colliders? Although they're kind of good because they change the terrain itself in a strange way that's pleasingly off.
Surprised how powerful the rigid body constraints are, by the way. Really weird when you can only drive in a single 2D line across the screen.
Basically I think it's ready to show to the class and get some feedback.
Not 100% happy with the title/startup process, but on the other hand it's pretty true to the original game, so not clear that I should mess with it. It's inelegant, that's all.
I suppose a weird possibility is to have a further title screen up front that gives the title and controls, and then cut to the game without a title in the traditional mode. That might actually be better now I think about it; I quite like that.
Pippin Barr presents... Tanks!?... instructions... then cut to the game proper.
Though one nice thing about the current setup is that it's lerping away while you read the instructions which means there's some work up front which is pretty fun. A 'solution' could be to have the 'serious' title shit happening over the top of the game so it's kind of already fucked? Or at least start the fucking on a slowish fade-in or something? So you get an intimation of what's happening before the gameplay properly starts? Possibilities there perhaps...
2018-02-28 19:27, in which my release candidate 'works' in WebGL and I solved the title problem?
Spent time today trying to make the game work with WebGL. Overall it wasn't too problematic, but there were a couple of extremely random-feeling bugs that make no sense to me. Notably, setting isKinematic crashes the game, and so does changing the clip plane or flags on the camera. When I comment those out for WebGL it behaves itself nicely, so I think I'll be able to release a build like that too.
CART 415 feedback
2018-03-01 08:45 (This entry got interrupted deliciously by dinner. Lemon and asparagus fettuccine.)
I showed more or less exactly the build I have right now to CART 415. The session included not just my students but also visiting students from Vallekilde led by Thomas Vigild. As such I had an audience of 40 looking at the game and offering some feedback which was a pretty great situation from my perspective.
Now, naturally, because I usually in some sense "know" what the game is meant to be like, and have relatively serious underlying philosophical/conceptual concerns going on, some of the feedback was necessarily a bit off track - most notably adjustments to try to make the game more fun or accessible to players.
We did make some good progress in terms of the timing of the changes - they were a little slow in my version, while of course the accelerated version was too fast. In response, while playing with the settings lately I realised there was actually no real point in having a random range applied to the change timing - part of the point is a kind of rigid progression of changes, so I'm making every change take exactly one second, like the ticking of a clock. Faster than previously, while not so fast that you have no chance to understand what's happening.
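In Unity this one-second tick would likely live in a coroutine or a frame-time accumulator; the accumulator shape is simple enough to sketch language-agnostically. A minimal Python sketch, assuming a per-frame update loop (the class name is mine):

```python
class ChangeClock:
    """Fire exactly one parameter change per second, like a clock
    tick, instead of sampling a random delay for each change."""
    TICK = 1.0  # seconds per change

    def __init__(self):
        self.elapsed = 0.0
        self.ticks = 0

    def update(self, delta_time):
        """Call once per frame with the frame's delta time; returns
        the number of changes due this frame (normally 0 or 1)."""
        self.elapsed += delta_time
        due = 0
        while self.elapsed >= self.TICK:
            self.elapsed -= self.TICK
            self.ticks += 1
            due += 1
        return due
```

Accumulating the leftover fraction (rather than resetting to zero) keeps the ticks rigidly periodic regardless of frame rate, which suits the 'ticking of a clock' idea.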
Beyond that, feedback revolved around the question of resetting. For which there are three major options:
- Reset on round end (when a tank is victorious and the round restarts)
In this version you go back to the original format when you restart the round. I'm not sure this fits with my ideas most notably because it puts the changes to the underlying game in service to the player experience - as someone said, this turns the game into a race to try to defeat your opponent before the world/game becomes so chaotic it's no longer possible. But that seems like a distraction to me - it makes the changes into an entertainment, a component of the gameplay, rather than a kind of external, mysterious force.
- Reset on some specific interaction
Similar to above, but even more gameplay-ish, is to have some sort of powerup or other way of making resetting the game part of how you play the game. I'm not a fan for the same reason. It did make me think, though, about a version of this same idea where the changes take place in a field around a tank... so that you're kind of warping the underlying engine by driving around... that would be kind of interesting, if it were localised like that? Not for this game of course, but as an idea...
- No resetting ever (the current situation and my favoured idea)
Well that's what I already have. Philosophically I think it makes the most 'sense' in that the idea is that the game remains exactly the same, except that there's this external element gradually pushing it toward entropy. If people want to have a more 'game-y' experience, they can of course just restart the entire game. I've made the movement from startup to play relatively brief (ignoring the Unity splash it's maybe 5 seconds or so?), so I don't think I'm eliminating the game-y version.
I ended up getting a WebGL build working pretty well. For reasons I absolutely don't get, you can't set isKinematic on a Rigidbody, you can't change the clip plane of the camera (near or far), and you can't change the camera's "clear flags" setting. The camera stuff makes moderately more sense to me, I suppose, but the Rigidbody thing really doesn't. C'est la vie. Oh, and also, despite the fact it appears to let you change the viewport of the camera, I've never seen it do anything. I guess the WebGL camera's just pretty radically different. Fine, fine.
The project is, I think, pretty much done. I need to fix the font in the WebGL build because it's not successfully using Arial. I think I need to include the actual font itself probably for it to work (I'm guessing the Arial it uses is just a representation of the system font). Fine.
Press kit is half-done. Good, good.
Onward and upward with the Tank Arts.