Tesla: allow customizing the command proxy to use #14616

Merged: 26 commits into evcc-io:master on Jun 30, 2024

Conversation

@FraBoCH (Contributor) commented Jun 28, 2024

Add support for a custom HTTP proxy like https://github.com/wimaha/TeslaBleHttpProxy for sending commands with the standard Tesla vehicle template, as discussed in #14252.

The corresponding YAML configuration would be:

vehicles:
- name: tesla
  type: template
  template: tesla
  title: Y not?
  accessToken: XXXX
  refreshToken: XXX
  vin: XXXXX
  capacity: 74
  commandProxy: http://192.168.X.XX:8080
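
For illustration only (this is not the PR's actual diff): a minimal Go sketch of the idea, assuming the proxy mirrors the Fleet API's command routes the way TeslaBleHttpProxy does. CommandClient, its fields, and SendCommand are hypothetical names.

package tesla

import (
	"bytes"
	"fmt"
	"net/http"
)

// CommandClient sends vehicle commands to a configurable base URL:
// either the official Fleet API host or a local proxy such as
// TeslaBleHttpProxy (hypothetical type, for illustration only).
type CommandClient struct {
	BaseURL string // e.g. "http://192.168.X.XX:8080" from commandProxy
	VIN     string
	Client  *http.Client
}

// SendCommand POSTs a Fleet-API-style command, e.g. "set_charging_amps".
func (c *CommandClient) SendCommand(name string, body []byte) error {
	url := fmt.Sprintf("%s/api/1/vehicles/%s/command/%s", c.BaseURL, c.VIN, name)
	resp, err := c.Client.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("command %s: unexpected status %s", name, resp.Status)
	}
	return nil
}

Because only the base URL changes, the same client works against the cloud API and a local BLE proxy.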

@FraBoCH FraBoCH changed the title Proposition to integrate a BLE HTTP proxy into the standard tesla template Integrate a BLE HTTP proxy into the standard tesla template Jun 28, 2024
@andig andig added the enhancement (New feature or request) label Jun 29, 2024
@FraBoCH FraBoCH marked this pull request as ready for review June 29, 2024 08:57
Resolved review threads (outdated): templates/definition/vehicle/tesla.yaml (2), vehicle/tesla.go
@wimaha (Sponsor, Contributor) commented Jun 29, 2024

I didn't see that there is another pull request like mine (#14620).

But I really think you have to remove the polling of the Tesla server each time the current is changed. Otherwise you will hit the API limit ...

@FraBoCH (Contributor, Author) commented Jun 29, 2024

> I didn't see that there is another pull request like mine (#14620).
>
> But I really think you have to remove the polling of the Tesla server each time the current is changed. Otherwise you will hit the API limit ...

We indeed need to verify that. My understanding is that the limit of "read" requests is higher and sufficient for our use case.

@andig (Member) commented Jun 29, 2024

According to docs there's an API limit, not a write limit.

@wimaha (Sponsor, Contributor) commented Jun 29, 2024

> We indeed need to verify that. My understanding is that the limit of "read" requests is higher and sufficient for our use case.

The limit of read requests is 200. So it's up to the weather whether that's enough or not.
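
(Illustrative arithmetic with assumed numbers: if each sub-6A current change triggers one read, and broken clouds shift the PV excess target roughly once a minute, a 200-read budget is spent after about 200 changes, i.e. within a single afternoon of variable weather.)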

@andig andig marked this pull request as draft June 29, 2024 11:34
@FraBoCH FraBoCH changed the title Integrate a BLE HTTP proxy into the standard tesla template Tesla: allow customizing the command proxy to use Jun 29, 2024
@FraBoCH (Contributor, Author) commented Jun 29, 2024

> The limit of read requests is 200. So it's up to the weather whether that's enough or not.

I see what you mean. On the other hand, this data is cached (and not fetched above 6A), so I wonder how many real API calls this triggers in one day. IMHO we should keep the existing logic to avoid issue #13007 coming back, and think of a further fix if we reach the API limits in practice and if that prevents evcc from doing its job properly.

@wimaha (Sponsor, Contributor) commented Jun 29, 2024

> I see what you mean. On the other hand, this data is cached (and not fetched above 6A), so I wonder how many real API calls this triggers in one day. IMHO we should keep the existing logic to avoid issue #13007 coming back, and think of a further fix if we reach the API limits in practice and if that prevents evcc from doing its job properly.

The cache is reset every time the current changes. So in my use case the limit will be reached very fast. 😕

@FraBoCH (Contributor, Author) commented Jun 29, 2024

Adding a parameter to cleanly disable the current check loop, so that we can let the (advanced) user decide.

@FraBoCH FraBoCH requested a review from andig June 30, 2024 06:30
@FraBoCH (Contributor, Author) left a comment

@andig what about updating the :tesla docker image with this PR for testing?

@andig (Member) commented Jun 30, 2024

I would suggest that we separate using a different proxy from limiting API requests. The latter could (maybe) be done without additional config. I'll try to file a PR for that.

Could you limit this one to proxy only?

@andig (Member) commented Jun 30, 2024

> what about updating the :tesla docker image with this PR for testing?

done

@andig andig merged commit 82add3a into evcc-io:master Jun 30, 2024
6 checks passed
@andig (Member) commented Jun 30, 2024

Regarding the api.CurrentGetter, let's see if this doesn't already work. It will only trigger a read below 6A, and due to caching it will only do so if the requested current below 6A has changed. The last improvement (though it would need to be tested) would be to only update when the current goes below 6A for the first time, assuming following requests will all work immediately.
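
A rough sketch of the behaviour described here (assumed logic and made-up names, not evcc's actual implementation):

package tesla

// currentGetter caches the last verified current and re-reads the API only
// when a new value below 6A is requested (assumed logic, for illustration).
type currentGetter struct {
	lastAsked float64
	cached    float64
	valid     bool
	read      func() (float64, error) // remote API read
}

// Verify returns the vehicle-side current for a requested charger current.
func (g *currentGetter) Verify(asked float64) (float64, error) {
	if asked >= 6 {
		g.valid = false // no verification needed at or above 6A
		return asked, nil
	}
	if g.valid && asked == g.lastAsked {
		return g.cached, nil // cache hit: no API call
	}
	v, err := g.read()
	if err != nil {
		return 0, err
	}
	g.lastAsked, g.cached, g.valid = asked, v, true
	return v, nil
}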

@Raudi1 commented Jun 30, 2024

Hopefully, getting vehicle info will be possible over BLE as well. teslamotors/vehicle-command#229

@andig (Member) commented Jun 30, 2024

Maybe, but it should not be necessary. The API limits should be fine once you start sending the commands via BLE.

@Raudi1 commented Jun 30, 2024

> Maybe, but it should not be necessary. The API limits should be fine once you start sending the commands via BLE.

Yes, but while it isn't an issue for me, there are people without cellular reception at their charging spots. I think I read somewhere that Wi-Fi alone doesn't work for the API. Could be wrong though.
But it's always a good thing to have local access without any cloud dependency, as this whole debacle with API limits shows, alongside many examples of other cloud services changing, not working, or shutting down completely.

@wimaha (Sponsor, Contributor) commented Jul 1, 2024

> Regarding the api.CurrentGetter, let's see if this doesn't already work. It will only trigger a read below 6A, and due to caching it will only do so if the requested current below 6A has changed. The last improvement (though it would need to be tested) would be to only update when the current goes below 6A for the first time, assuming following requests will all work immediately.

As described above, this is an issue for my use case. I always charge with less than 6A, and when the weather is changing the API limit is reached. Maybe we can try your proposed solution. Perhaps I can file a PR for that.

@andig (Member) commented Jul 1, 2024

Could you check the logs to see why that happens? We're caching the result and should only call the API if the current is updated, that is at 2, 3, 4, or 5 amps. Do we even need to call it if the change remains below 6A, or only when going from 6A to below? Needs to be tested.

@FraBoCH (Contributor, Author) commented Jul 1, 2024 via email

@andig (Member) commented Jul 1, 2024

Actually, we've just added this in #14622. Since the TWC has phase currents, this should work if you remove the api.CurrentGetter from the vehicle. Could you give it a try?

@FraBoCH (Contributor, Author) commented Jul 1, 2024

Mmmmh. Is any change needed, or is it supposed to already work with the TWC in the latest nightly?
I am testing with a custom vehicle getting data from TeslaMate (without GetMaxCurrent, obviously), but it doesn't detect the phase mismatch when switching from 13A to 1A. The Tesla gets stuck at 5A instead of 1A in that case.

@andig (Member) commented Jul 1, 2024

Ahhh, I see. Need to check the error return code, too. Change coming.

@andig (Member) commented Jul 1, 2024

@FraBoCH see #14659. It is a bit subtle but should work. The remaining drawback is that we'll only recognize the current as too high when it is off by at least 1A, so it might not detect a failure to switch to 5A, but should detect it at 4A due to https://github.com/evcc-io/evcc/pull/14622/files#diff-c456526d4bd9022a488ce32de208b40d86016e2c40b0c81a2dffe557c186377dR723. Seems this is the best we can currently do. YMMV.
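
The 1A margin mentioned here amounts to a check of roughly this shape (a paraphrased sketch, not the linked evcc code):

// tooHigh reports whether the measured vehicle current exceeds the
// requested one by at least a full amp, the smallest deviation the
// check can distinguish (sketch of the 1A margin described above).
func tooHigh(requested, measured float64) bool {
	return measured-requested >= 1
}

So a car that stays near 5.5A when 5A is requested slips through, but is caught once 4A is requested.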

Labels: devices (Specific device support), enhancement (New feature or request)
4 participants