plat: apl: clock gating adjusted for dai, dma, and cores #452
lgirdwood merged 2 commits into thesofproject:master
Conversation
lgirdwood
left a comment
Just a question on why probe() would be called twice or more on a device. The driver's probe can be called more than once (and would create a new device every time it is called), but I would expect probe() to be called only once per device. Unless I'm missing something?
We should probably pass the index in and combine it with the clock ID, e.g. CPU_CLOCK_CORE0, etc.?
What's the use case here ? I guess topology could create the device twice in error ?
(this also applies to the multiple-probe question) my assumptions:
- Some devices may be created on init and some on demand. The client should not need to know whether probe() must be called before get(), so it may try to probe a device that was already created.
- There is no API yet to 'return' a device back to the pool (implicitly calling remove() for some of them), so a get-probe sequence may be run again by a topology component that is destroyed and then re-created.
Actually, get() could check whether the device exists before calling probe(), and the get_drvdata() test could stay there but return an error (I guess probe() had not been protected against double allocation before).
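The idea could be sketched roughly as follows. This is a minimal, hedged sketch: struct dev, dev_probe(), and dev_get() are hypothetical names invented for illustration, not the actual SOF API. get() checks whether the device already exists before probing, while a direct double probe() still fails.

```c
#include <stddef.h>

/* Illustrative sketch only; not the actual SOF API. */
struct dev {
	int probed;   /* set once by probe() */
	int refcount; /* number of clients holding the device */
};

/* probe() is protected against double allocation: a second call is a
 * logic error and is rejected. */
static int dev_probe(struct dev *d)
{
	if (d->probed)
		return -1;
	d->probed = 1;
	return 0;
}

/* get() checks whether the device exists before calling probe(), so a
 * client may call get() without knowing whether probe() already ran. */
static struct dev *dev_get(struct dev *d)
{
	if (!d->probed && dev_probe(d) < 0)
		return NULL;
	d->refcount++;
	return d;
}
```

With this shape, a second get() on the same device is not an error; only a direct second probe() is.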
ok thanks, I see - I think the simplest approach for us would be to only probe() devices when they are added by topology and remove() them when topology is unloaded. This can probably be summarized with some rules (that we could also document):
- we don't automatically probe() any pipeline device unless it's added by topology.
- we only remove a component device when it's unloaded by topology.
- a component added twice would return an error.
- removing an active component would return an error.
- pipeline trigger start will fail if the pipeline has no valid sink and source endpoints.
The driver would manage the component load/unload and ref count components in pipelines that shared components. @ranj063 is working on driver code for this shortly (PM related)
The thing we have to watch for is non-pipeline IO drivers like low-level DMACs and SSP; these would need to be reference counted by host.c and dai.c respectively.
@lgirdwood Agree. [Edit: missed the last paragraph; removed and re-phrased, similar conclusion].
However, reference counting in host.c and dai.c might not be sufficient if you add another client, and there would be a need for more "global" reference counting. How about doing this inside the specific device driver (dw-dma.c for example)? This approach would require each client (host, dai) to "put" the device, but I planned to add dai_put() and dma_put() calls anyway. At the moment there is a dma_get-channel_get-channel_put sequence implemented, and I planned to balance them inside the _free() ops.
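A balanced get/put with release-on-last-put might look like the sketch below. This is illustrative only: struct shared_dev and the function names are invented; the dai_put()/dma_put() calls mentioned above would presumably follow the same shape.

```c
/* Sketch of balanced reference counting inside a device driver; the
 * identifiers are illustrative, not actual dw-dma.c code. */
struct shared_dev {
	int refcount;
	int released; /* set when the last client puts the device */
};

static void shared_get(struct shared_dev *d)
{
	d->refcount++;
}

/* put() implicitly releases the device when the last reference drops,
 * which is what balancing the gets inside the _free() ops would do. */
static void shared_put(struct shared_dev *d)
{
	if (d->refcount > 0 && --d->refcount == 0)
		d->released = 1;
}
```

The point is that neither client (host or dai) needs to know about the other: the device is released exactly when the last put() arrives.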
@lgirdwood Please take a look at the latest implementation that handles ref counting (simple one, protected by early initialized locks) in the lib layer. Done for the get() side, to be completed on the put() side if this is acceptable direction.
I like the locking update, it's a needed simplification, but I'm still not sure about calling probe() twice as it diverges from the Linux model. It could also be due to a (recent?) change to the probe() API in which
static int hda_dma_probe(struct dma *dma)
returns an int instead of returning a "struct device *". If we returned a struct dma device (which can contain a generic base structure) then we can manage devices at a global level.
lgirdwood
left a comment
There was a problem hiding this comment.
It looks like we need some infrastructure to create a global device and driver model.
struct device {
	list_head list;     /* in global list of devices */
	struct driver *drv; /* probe(), remove(), pm() contained in a generic struct */
	int type;           /* device type */
	int id;             /* device ID */
};
The struct driver above would be contained in every driver structure (like SSP, DMIC, DMAC etc) and be something like
struct driver {
	struct device* (*probe) (int id);
	int (*remove) (struct device *dev);
	/* other generic driver ops */
};
This would then align the device model to be very Linux-like and give us simple device management and device-level PM.
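As a sketch of how a specific driver could embed the generic base, following the Linux container_of pattern: the struct ssp_dev wrapper and the to_ssp() macro below are hypothetical, invented to illustrate the proposal, and the sketch omits the global list member for brevity.

```c
#include <stddef.h>

/* Generic device/driver model from the proposal above (list omitted). */
struct device;

struct driver {
	struct device *(*probe)(int id);
	int (*remove)(struct device *dev);
	/* other generic driver ops */
};

struct device {
	struct driver *drv; /* generic ops */
	int type;           /* device type */
	int id;             /* device ID */
};

/* Hypothetical SSP driver structure embedding the generic base. */
struct ssp_dev {
	struct device dev; /* generic base */
	int port;          /* SSP-specific state */
};

/* Recover the driver-specific structure from the generic handle,
 * container_of-style. */
#define to_ssp(d) \
	((struct ssp_dev *)((char *)(d) - offsetof(struct ssp_dev, dev)))
```

Generic code would then walk a list of struct device and call dev->drv->remove(dev) or PM ops without knowing the concrete driver type.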
fwiw, a minor improvement would be to invert the test:
if (d->index != index)
continue;
This would mean you would not need to wrap the trace calls further down onto new lines, as the indentation would not change.
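For illustration, the early-continue pattern might look like this in a lookup loop; the helper function and its fields are invented for the example, not actual SOF code.

```c
/* Early continue keeps the loop body at one indentation level, so
 * long trace calls inside it need not be wrapped onto new lines.
 * This helper is an invented example. */
static int find_by_index(const int *indexes, int count, int index)
{
	int i;

	for (i = 0; i < count; i++) {
		if (indexes[i] != index)
			continue;
		/* match: the rest of the body stays un-nested */
		return i;
	}
	return -1; /* not found */
}
```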
Should we put some trace here, as this could be caused by a bad topology?
I like the locking update, it's a needed simplification, but I'm still not sure about calling probe() twice as it diverges from the Linux model. It could also be due to a (recent?) change to the probe() API in which
static int hda_dma_probe(struct dma *dma)
returns an int instead of returning a "struct device *". If we returned a struct dma device (which can contain a generic base structure) then we can manage devices at a global level.

@lgirdwood With a simple ref count as implemented now, probe() is called only once; please note that probe() returns an error code (which is a logic rather than a run-time error) if the device already exists. Is this test confusing, and should it not be there? Regarding the probe() API, it has stayed unchanged in the code since 2016, and I believe it is consistent with the Linux kernel APIs as well. The driver-device model and API separation sounds great. How about completing the put+remove path now and doing the drv/dev refactor in the next iteration?
Signed-off-by: Marcin Maka <marcin.maka@linux.intel.com>
@mmaka1 you're right, I must be thinking about the Linux driver probe() :)
dai clocks ungated and resources allocated on the first use.
dma clocks ungated and resources allocated on the first use.
cores clocks gated in idle.
Signed-off-by: Marcin Maka <marcin.maka@linux.intel.com>