This repository has been archived by the owner on Dec 24, 2023. It is now read-only.

Consolidating all the differing parts of OSHI into a better model #11

Open
YoshiEnVerde opened this issue May 16, 2017 · 25 comments

@YoshiEnVerde
Collaborator

I've been thinking hard about this for a while.
Between issues like #310, and the kinds of issues that keep cropping up (especially when asking for new features), I can't help but think that OSHI's model is starting to become a bit of a Frankenstein's monster...

Right now, it's not unusual for somebody to ask for a feature that's only available on a single platform, and for OSHI to end up with an extra feature on that platform alone.
In direct opposition, however, the current model will abstract or rename things on some platforms to have them fall in line with another platform.

As far as I can see, we have a few model/design details that need to be addressed:

1. We need a consolidated model across all platforms
2. We need to keep the API as far away from the non-consolidated parts as possible
3. We need a better way of recovering and updating the system info

@YoshiEnVerde
Collaborator Author

The main idea/model I have in mind would be something like:

  1. A 3-tier architecture/model: a transparent API, an internal set of objects recovering/caching/updating the needed data, and a set of "drivers" that would recover said data from the system
    1-1. The API would be as transparent as our current API is supposed to be: a tree starting from Computer, down to a Hardware object and a Software one, then moving further down by hardware and software components (like CPU/Processor/etc., MOBO, drives, etc.)
    1-2. The drivers would consist of one object for each recovery method (like one WMIC driver for Windows, one driver for each *nix command, etc). These would recover their data on every call, no cache involved
    1-3. The fetching layer would have objects capable of receiving update requests from the API layer and recovering the needed data from the drivers, caching it when wise to do so (see the sketch below)

This would solve part of the first point, and the whole second and third points, of my first post.
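
To make the tiers concrete, here is a minimal sketch of how the three layers could relate; all class and interface names are illustrative, not existing OSHI types:

```java
// Tier 1: transparent API the user sees.
interface CentralProcessor {
    String getName();
}

// Tier 3: a driver that recovers raw data from the platform on every call, no caching.
interface ProcessorNameDriver {
    String fetchName();
}

// Tier 2: fetching/caching layer that answers API calls and decides when to hit the driver.
class CachingCentralProcessor implements CentralProcessor {
    private final ProcessorNameDriver driver;
    private String cachedName; // a processor name never changes, so fetch it once

    CachingCentralProcessor(ProcessorNameDriver driver) {
        this.driver = driver;
    }

    @Override
    public String getName() {
        if (cachedName == null) {
            cachedName = driver.fetchName();
        }
        return cachedName;
    }
}
```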

  2. For a consolidated approach to the API we need to separate features cleanly between shared and unique features.
    2-1. Shared features should be part of a generalized set of API interfaces, available no matter which platform OSHI runs on. For features available almost everywhere, or important enough, we should just throw UnsupportedOperationException on unsupported platforms.
    2-2. For features unique to one (or two) platforms, we should provide an API extension, something like WindowsCentralProcessor as a specific child of CentralProcessor, that would expose those features. Then a user, knowing the platform they're on, could downcast to it and access them (under the caveat that an incorrect cast will fail). See the sketch below.

This would solve the remaining issues from point 1
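
A minimal sketch of that downcast pattern, reusing the CentralProcessor interface from the sketch above; WindowsCentralProcessor and its extra method are hypothetical, used only to illustrate the idea:

```java
// Platform-specific extension of the general API (hypothetical).
interface WindowsCentralProcessor extends CentralProcessor {
    int getWindowsSpecificFeature(); // only meaningful on Windows
}

class DowncastExample {
    static void printWindowsFeature(CentralProcessor cpu) {
        // The caller knows it is running on Windows, so the cast is expected to succeed;
        // an incorrect cast would simply fail, which is the documented caveat.
        if (cpu instanceof WindowsCentralProcessor) {
            System.out.println(((WindowsCentralProcessor) cpu).getWindowsSpecificFeature());
        }
    }
}
```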

  3. As an extra point I missed in the original post, doing things this way would allow us to write a good set of JavaDocs for OSHI, since all the important info for each method/feature would be in the transparent APIs, with documented UnsupportedOperationExceptions wherever a feature was not universally supported, and the ability to add little notes on each interface about peculiarities of each platform implementation.
    This would reduce the number of times the same issue is added to the tracker for the Nth time, just because it was explained and closed many months ago.
    It would also make it easier to update and maintain the code, as any new feature would start as an add-on to the corresponding platform-specific API object, and only be promoted to the general APIs if a way of making it work on the majority of platforms were added later.

@dbwiddis
Member

This all sounds great in theory. Would you like to lead the redesign? :)

@YoshiEnVerde
Copy link
Collaborator Author

Give me a couple weeks to free up some space on my schedule, and I'll do it gladly ;)

@YoshiEnVerde YoshiEnVerde self-assigned this May 18, 2017
@dbwiddis
Member

The one thing I'm iffy about in your plan is the use of an Exception for unsupported features. I'd much rather return either sensible defaults (0, empty lists) or even "nonsense defaults" (-1, etc.). If we do use exceptions, they should be used sparingly, and I'd like to have a custom OSHI exception type (or types) like UnsupportedPlatform or UnsupportedPermissions, etc.

@YoshiEnVerde
Collaborator Author

YoshiEnVerde commented May 18, 2017

The idea for using an exception was, mainly, for very rare features that might be available in 9 out of 10 platforms, but might be missing on one or two.
It was never even contemplated for most of them...

Even so, I would also prefer a standardized result for failed fetches.

One way would be to implement something akin to the Optional object in Java.
Something like OshiResult<T>, with methods:
boolean wasSuccessful(); returning whether the fetch succeeded or not,
and
T getResult(); returning the value if it didn't fail (or null as a default).

This way, we remove all the problems we have when empty strings or null values are acceptable results for some fetches but are the failure markers for others.
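
A rough sketch of what that OshiResult<T> could look like; everything here is tentative, not an agreed design:

```java
// Tentative sketch of OshiResult<T>.
class OshiResult<T> {
    private final T value;
    private final boolean successful;

    private OshiResult(T value, boolean successful) {
        this.value = value;
        this.successful = successful;
    }

    static <T> OshiResult<T> success(T value) {
        return new OshiResult<>(value, true);
    }

    static <T> OshiResult<T> failure() {
        return new OshiResult<>(null, false);
    }

    boolean wasSuccessful() {
        return successful;
    }

    T getResult() {
        return value; // null by default when the fetch failed
    }
}
```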

@YoshiEnVerde
Collaborator Author

And then add some custom wrapper exceptions, to catch any failure within the code and return it as part of the OshiResult, maybe with a method:
Throwable getFailureCause();
returning the wrapper exception, or null if it didn't fail.

That way, the correct usage of OSHI would be standardized to:

  1. Initialize the SystemInfo object
  2. Go to your desired System Part
  3. Ask for the desired value, and get an OshiResult object as result.
  4. Ask if the fetch failed, then recover the value or cause accordingly

This would keep any forgotten exception from breaking code, and make it easier to actually support multiple platforms with the exact same OSHI code.
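
Sketched out, assuming the OshiResult type above gains a failure(Throwable) factory and the getFailureCause() accessor described here; the driver type and method names are invented for illustration:

```java
// Hypothetical Tier 3 driver; the name and method are invented.
interface BaseboardDriver {
    String querySerialNumber() throws Exception;
}

// Tier 2 object wrapping any internal failure instead of letting it escape.
class BaseboardInfo {
    private final BaseboardDriver driver;

    BaseboardInfo(BaseboardDriver driver) {
        this.driver = driver;
    }

    OshiResult<String> getSerialNumber() {
        try {
            return OshiResult.success(driver.querySerialNumber());
        } catch (Exception e) {
            // Assumed failure(Throwable) overload: the cause is kept for getFailureCause().
            return OshiResult.failure(e);
        }
    }
}

// Caller side, following steps 1-4 above:
// OshiResult<String> serial = baseboard.getSerialNumber();
// if (serial.wasSuccessful()) {
//     System.out.println(serial.getResult());
// } else {
//     serial.getFailureCause().printStackTrace();
// }
```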

@dbwiddis
Member

I really like the OshiResult idea to avoid using exceptions, but "result" is a one-way street. Why not just have an OshiObject that holds results (including nested OshiObjects)?

These objects would have a boolean update() method which would return false if any of its fetches failed, with details of the failure in another attribute (a list of Strings naming the failed attributes). If it returned true, then all its values could be reliably fetched.

(Alternate idea: have two String collections, getSuccessful() and getUnsuccessful(), which store the appropriate results of each update; if you just want the successful results, you iterate that.
Another similar idea: getAttributes() lists all the possible fetches; you could take the set difference and remove any failed values from that collection when processing.)

This would also be a centralized format/structure to return JSON or other formatted/serialized objects, enable JMX, Metrics, etc.
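
A rough sketch of that holder-object alternative; the names and structure below are placeholders, not a settled design:

```java
import java.util.ArrayList;
import java.util.List;

// Holder object whose update() reports whether every attribute could be fetched.
abstract class OshiObject {
    private final List<String> failedAttributes = new ArrayList<>();

    /** Refreshes all attributes; returns false if any fetch failed. */
    boolean update() {
        failedAttributes.clear();
        doUpdate(failedAttributes);
        return failedAttributes.isEmpty();
    }

    /** Names of attributes that could not be fetched during the last update. */
    List<String> getUnsuccessful() {
        return failedAttributes;
    }

    /** Subclasses fetch their attributes and record any failures by name. */
    protected abstract void doUpdate(List<String> failures);
}
```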

@YoshiEnVerde
Collaborator Author

YoshiEnVerde commented May 18, 2017

It all goes back to the 3-Tier Design I was talking about in the OP.
Anything that is part of the OSHI architecture would not be the result of a data fetch, but part of the API.

I'll use the NetworkInterface object as an example:
We have a SystemInfo object that has a method HardwareInfo getHardware().
This method returns a HardwareInfo object, which has a method NetworkInterfaces getNetworkInterfaces().
This last object is a wrapper for a List<NetworkInterface>, with a simple getter for the list or for single objects within it.

All these objects are created between Tiers 1 (the public API) and 2 (the internal handling of data structures).
When you ask a specific NetworkInterface object for its data, it returns OshiResult<T> objects populated from information gathered by Tier 3 (the data/platform drivers).

For the update() methods, that would be a simple interface Updateable, which most objects in Tiers 1 and 2 would implement.

This way, you can ask the whole SystemInfo to update, or only the HardwareInfo block, maybe just the NetworkInterfaces list, maybe (when possible) even one specific NetworkInterface.
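
As a sketch, and reusing the OshiResult idea from above, the tree and the Updateable interface might look like this; none of these interfaces exist yet, the names simply follow the example in this comment:

```java
// Design sketch only, not current OSHI code.
interface Updateable {
    void update();
}

interface SystemInfo extends Updateable {
    HardwareInfo getHardware();
}

interface HardwareInfo extends Updateable {
    NetworkInterfaces getNetworkInterfaces();
}

interface NetworkInterfaces extends Updateable {
    java.util.List<NetworkInterface> getAll();
    NetworkInterface get(int index);
}

interface NetworkInterface extends Updateable {
    OshiResult<Long> getBytesReceived(); // populated from Tier 3 driver data
}
```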

@YoshiEnVerde
Collaborator Author

YoshiEnVerde commented May 18, 2017

On the implementation side, all interfaces and general abstract implementations would be part of Tier 1; while all the platform specific implementations and general fully implemented classes would be part of Tier 2.

Then, Tier 3 would consist of one class/object for each fetching method we have (as in, one for WMIC on Windows, another for Registry Fetching on Windows, another for each command invoked, etc).

The main idea is that, outside of explicit calls to the update() method on Tier 1 classes, Tier 3 drivers would know if they are update-able or not (some calls will be fixed the first time, some might be cache-able, some might just fetch on every call).

Finally, for features that are platform specific, to avoid the unsupported exceptions or methods that always return failures, Tier 1 would have a platform specific sub-set of all the interfaces, so that the user could explicitly ask for the current platform, then cast the objects to the corresponding platform specific version to get access to those.

Of course, doing so would come with all the documented warnings stating abnormal operation if the objects are mis-cast.
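
As an illustration of what a Tier 3 driver could look like, here is a minimal sketch of a class wrapping a single *nix command; the class name and its place in the design are assumptions, only the ProcessBuilder usage is standard Java:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// One driver per fetching method: this one only knows how to run `uname -r`.
class UnameDriver {
    /** Fetches the kernel release string fresh on every call; no caching here. */
    String fetchKernelRelease() throws IOException {
        Process p = new ProcessBuilder("uname", "-r").start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            return reader.readLine();
        }
    }
}
```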

@dbwiddis
Member

It sounds like you have a good handle on this, so I'll don my "weeks of coding can save you hours of planning" T shirt and sit back and watch. :)

@YoshiEnVerde
Collaborator Author

LOL. Once I have the free time to do this (I'll probably start this weekend), I'll build an empty skeleton frame for the idea on a branch, and link it here.
That way it'll be easier to point to details and see what to fix or make better.

@YoshiEnVerde
Collaborator Author

My main set of objectives with this is to:

  1. Make OSHI as robust and reliable as possible, by removing contradictory results, streamlining the usage, and explicitly documenting any functionality that might not work on a specific platform
  2. Simplify the addition of future features, by standardizing the process of first adding platform-specific methods in non-standard API objects (only reachable by explicit down-casting), then moving them up to the general API if/when a majority of the supported platforms have it implemented (thus allowing for a lot more feature adding without breaking the general API)
  3. Simplify data-fetching maintenance, by keeping each fetching method (or requirement) separate from the others, but each similar one together. Thus, any update (or addition) to, for example, what JNA requires for a WMIC call will be easily updateable in a single class (instead of spread across many objects)

This would also make it easier to implement more platforms in OSHI, since any new platform could be implemented full of failed results, then populated driver by driver, until as many features as possible are available.

@dbwiddis
Member

Let me tack on another objective.
4. Be fast and lightweight; only retrieve and store requested information to minimize CPU and memory footprint.

@YoshiEnVerde
Collaborator Author

I hear and obey ;)

If I can build this correctly, it shouldn't be a big problem, since I could have each object fetch whatever values it needs on demand, instead of doing so at SystemInfo initialization.

Continuing with the previous example, the NetworkInterfaces object would not be populated until the HardwareInfo.getNetworkInterfaces() method is called; then any internals of each NetworkInterface that weren't populated as part of the list creation would only be fetched on a call to their corresponding getter methods.
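
A minimal sketch of that lazy population, reusing the NetworkInterface interface from the earlier sketch; NetworkDriver and the class name are illustrative only:

```java
import java.util.List;

// Stand-in for a Tier 3 driver that enumerates interfaces.
interface NetworkDriver {
    List<NetworkInterface> enumerateInterfaces();
}

class HardwareInfoImpl {
    private final NetworkDriver driver;
    private List<NetworkInterface> interfaces; // stays null until first requested

    HardwareInfoImpl(NetworkDriver driver) {
        this.driver = driver;
    }

    List<NetworkInterface> getNetworkInterfaces() {
        if (interfaces == null) {
            // Populated only when first asked for, not at SystemInfo initialization.
            interfaces = driver.enumerateInterfaces();
        }
        return interfaces;
    }
}
```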

@YoshiEnVerde
Collaborator Author

I've been swamped by work, and I only just realized today that it's been over 3 months since my tentative deadline for the mock up.

I'll finish the mock up in a week or so.

@dblock

dblock commented Aug 31, 2017

Or maybe you want to PR a very small change that makes one tiny step in the direction you're describing, @YoshiEnVerde?

@YoshiEnVerde
Collaborator Author

The time frame is more about finding room in a busy schedule to implement anything at all than about complexity or size ;)

The main problem is that this issue is about a major restructuring/refactoring of the code for next version, and there's no way to add the important parts to the existing code base.

The current status for this is around "On the design board" right now, and the mock up would be for tweaking the overall design before implementing anything. It'll mainly be critical interfaces and some mock implementations to give a general idea.

After that, I'll spend a few weeks taking in any thoughts, comments, ideas, and suggestions that could improve it, and then start on the heavy-duty stuff.

@ejaszewski

@YoshiEnVerde Seems like this issue is directly related to the changes mentioned in #306, which I am going to (finally) get to again today. I don't mind making the battery API a sort of mock-up for this, since I have to re-implement most of it anyway.

@YoshiEnVerde
Collaborator Author

YoshiEnVerde commented Aug 31, 2017

@ejaszewski That sounds great!
I missed how much that issue had evolved.

I'll try to have that fork up sooner, then, just so you can see the general design for this and work accordingly.

@ejaszewski

@YoshiEnVerde Took a quick look at some of the other issues, and #400 may also be a good candidate for including in a 4.0 test release.

@YoshiEnVerde
Collaborator Author

@ejaszewski Yeah, I saw that one. I actually just added some possible Windows solutions there

@ejaszewski

@YoshiEnVerde any progress on that fork?

@spyhunter99

Regarding the duplication of model code with JSON-annotated object versions: are the JSON-annotated versions really needed? Jackson has the ObjectMapper, which works reasonably well for serializing and deserializing, and most of the time Jackson-specific annotations are not required.
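
For example, a plain getter-based object serializes with a stock ObjectMapper and no annotations at all; the BaseboardPojo below is just a stand-in, not an OSHI class:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

class BaseboardPojo {
    public String getManufacturer() { return "ACME"; }
    public String getModel() { return "X99"; }
}

class JsonDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Prints {"manufacturer":"ACME","model":"X99"} without any Jackson annotations.
        System.out.println(mapper.writeValueAsString(new BaseboardPojo()));
    }
}
```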

@dbwiddis
Member

The ideal end state is a serialized object which can be easily mapped to JSON or XML or any other serialization of the user's choice.

My initial choice when implementing JSON was to use an alleged Java standard (javax.json) in preference to a third party extension. However, Jackson is so widely used (and flexible), and brings with it so much more power (including easily enabling JMX, integration with Metrics, etc.) that I think that's probably the way to go.

@dbwiddis
Member

To clarify my previous comment: the end state (version 4.0?) should have oshi-core containing serializable (nearly POJO) objects with the attributes we care about; there can/should be an oshi-jackson parallel project which provides utility methods for producing json, xml, or other formats (but is not a requirement to use the core functionality).

@dbwiddis dbwiddis transferred this issue from oshi/oshi Jan 12, 2019