feat(aws): add support for Lambda Function URLs (#284)
Conversation
📝 Walkthrough

Adds AWS Lambda Function URL support: new config options and enum, resource management with creation/deletion and caching including retry/backoff, a new FunctionURLTrigger (sync/async) with serialization, AWS integration points in trigger creation/deletion, trigger deserialization, example config entries, and end-to-end tests.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 4
🧹 Nitpick comments (2)
sebs/aws/config.py (1)
388-393: `return True` inside `try` block; consider `else`.

Per the TRY300 hint, moving `return True` (and the preceding cache cleanup) to the `else` clause of the `try`/`except` improves clarity and makes it unambiguous that the positive path only runs when no exception was raised.

♻️ Proposed refactor:

```diff
 try:
     lambda_client.delete_function_url_config(FunctionName=function_name)
     self.logging.info(f"Deleted Function URL for {function_name}")

     try:
         lambda_client.remove_permission(
             FunctionName=function_name,
             StatementId="FunctionURLAllowPublicAccess",
         )
     except lambda_client.exceptions.ResourceNotFoundException:
         pass

     if function_name in self._function_urls:
         del self._function_urls[function_name]
-
-    return True
 except lambda_client.exceptions.ResourceNotFoundException:
     self.logging.info(f"No Function URL found for {function_name}")
     return False
+else:
+    return True
```
sebs/aws/triggers.py (1)
152-155: `ThreadPoolExecutor` created per `async_invoke` call is never shut down.

A new pool is allocated on each call with no `pool.shutdown()`, leaking threads until GC. This mirrors the pre-existing pattern in `HTTPTrigger.async_invoke` (lines 114-118). Consider sharing a single class-level executor or calling `pool.shutdown(wait=False)` after submitting:

♻️ Proposed fix:

```diff
 def async_invoke(self, payload: dict) -> concurrent.futures.Future:
     pool = concurrent.futures.ThreadPoolExecutor()
     fut = pool.submit(self.sync_invoke, payload)
+    pool.shutdown(wait=False)
     return fut
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sebs/aws/aws.py`:
- Around line 439-444: The delete_function_url method is never invoked, leaving
entries in the in-memory/on-disk _function_urls cache orphaned when
delete_function removes the Lambda on AWS; update delete_function to call
delete_function_url(func_name) after a successful AWS delete (or in its
cleanup/finally block) so the cache and on-disk state are removed (handle and
log any exceptions from delete_function_url and preserve the original
delete_function return semantics), referencing the existing delete_function and
delete_function_url methods and the _function_urls cache for where to apply the
change.
In `@sebs/aws/config.py`:
- Around line 464-469: The code directly sets ret._function_url_auth_type which
bypasses the property setter validation; change that line to use the property
setter (ret.function_url_auth_type = dct.get("function-url-auth-type", "NONE"))
so the value is validated (must be "NONE" or "AWS_IAM"); ensure the setter on
the class (function_url_auth_type property) enforces/raises on invalid values so
bad config entries are rejected instead of silently stored.
- Around line 349-357: Change the broad except Exception block around the
Function URL creation to catch specific, retryable errors (e.g.,
botocore.exceptions.ClientError or the specific boto3 error types) instead of
all exceptions, use self.logging.exception(...) to log the full traceback
(replace self.logging.error(e)), and when giving up raise a RuntimeError(...)
from e to preserve the original exception chain; also ensure non-retryable
exceptions are re-raised immediately rather than counted toward retries
(reference the retry loop surrounding the Function URL creation and the
variables retries, self.logging, and the raised RuntimeError).
In `@sebs/aws/triggers.py`:
- Around line 148-150: sync_invoke currently calls _http_invoke which always
posts unsigned HTTP requests, so Function URLs using AWS_IAM will 403; update
sync_invoke (or _http_invoke) to check self.auth_type and, for "AWS_IAM", sign
the HTTP request with AWS SigV4 (use botocore.auth.SigV4Auth via the existing
boto3/session credentials to add the required headers) before performing the
pycurl POST, or alternatively raise a clear exception for unsupported auth
types; reference sync_invoke, _http_invoke and the self.auth_type attribute when
making the change.
On the `auth_type` property:

```python
@property
def auth_type(self) -> str:
```

What are the available authentication types? "None" and "IAM"? Is there any other alternative?

I think it would be cleaner with a string-based enum instead of a plain string :) This will also make it easier to create asserts (see other comments) and implement `if` conditions.
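A minimal sketch of the string-backed enum the reviewer asks for. The name `FunctionURLAuthType` and the `from_string` helper match what later review excerpts show the PR adopting; the exact body here is illustrative:

```python
from enum import Enum


class FunctionURLAuthType(str, Enum):
    """Auth types accepted by Lambda Function URLs."""

    NONE = "NONE"
    AWS_IAM = "AWS_IAM"

    @staticmethod
    def from_string(value: str) -> "FunctionURLAuthType":
        try:
            return FunctionURLAuthType(value)
        except ValueError as e:
            # Chain the original error so the traceback is preserved.
            raise ValueError(
                f"Invalid auth type '{value}'. Must be one of: "
                f"{[m.value for m in FunctionURLAuthType]}"
            ) from e
```

Deriving from `str` keeps serialization trivial (`member.value` is the exact string AWS expects), while invalid config values fail fast with a clear message.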
On the setter validation:

```python
@function_url_auth_type.setter
def function_url_auth_type(self, value: str):
    if value not in ("NONE", "AWS_IAM"):
```

Makes sense, but please use Enums here.
On the public-access permission:

```python
if auth_type == "NONE":
    try:
        lambda_client.add_permission(
```

Please use logging to print a warning here - the function now has unrestricted public access. Users should be aware of that.
```python
            FunctionUrlAuthType="NONE",
        )
    except lambda_client.exceptions.ResourceConflictException:
        pass
```

When does this exception happen - if we try to add a permission to a function that already has it? That should be safe, but can we add a comment explaining this, to make it clear it is safe to ignore this exception?
On the retry loop:

```python
retries = 0
while retries < 5:
    try:
        response = lambda_client.create_function_url_config(
```

I don't get the logic of this code and there's no comment explaining it:
(1) When can the ResourceConflictException happen if we previously had ResourceNotFoundException?
(2) What other exceptions can happen that force us to repeat? IIRC, boto3 typically creates specific exception types that describe situations, e.g., another resource update is pending or too many requests. We should capture them explicitly to not blindly repeat loop iterations when the error type indicates that all future requests to create function URLs will also fail.
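A sketch of what point (2) asks for: catching only specific, plausibly transient boto3 error types rather than a bare `except Exception`. The helper name `create_url_with_retry` and the `base_delay` parameter are hypothetical; `ResourceConflictException` and `TooManyRequestsException` exist on the real Lambda client:

```python
import time


def create_url_with_retry(lambda_client, function_name: str,
                          max_retries: int = 5, base_delay: float = 1.0) -> dict:
    """Retry only on throttling; treat a conflict as 'already exists'."""
    for attempt in range(max_retries):
        try:
            return lambda_client.create_function_url_config(
                FunctionName=function_name, AuthType="NONE"
            )
        except lambda_client.exceptions.ResourceConflictException:
            # URL already exists (e.g., created concurrently): fetch it instead.
            return lambda_client.get_function_url_config(FunctionName=function_name)
        except lambda_client.exceptions.TooManyRequestsException:
            sleep_s = base_delay * (attempt + 1)  # linear back-off
            print(f"Throttled, retrying in {sleep_s:.1f}s...")
            time.sleep(sleep_s)
    raise RuntimeError(f"Failed to create Function URL for {function_name}")
```

Any other exception type propagates immediately instead of being counted toward the retries, which is the fail-fast behavior the reviewer wants.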
On the error path:

```python
    self.logging.error(e)
    raise RuntimeError("Failed to create Function URL!")
else:
    self.logging.info("Function URL creation failed, retrying...")
```

Please inform the user of the increasing sleep time (linear back-off).
On the permission cleanup:

```python
self.logging.info(f"Deleted Function URL for {function_name}")

try:
    lambda_client.remove_permission(
```

Shouldn't this be conditional on the authentication type?
@Sharayu1418 Thank you for the contribution, it is much appreciated! I left a few comments :)
Actionable comments posted: 1
🧹 Nitpick comments (5)
sebs/aws/config.py (3)
31-37: `raise` in `except` block should chain the original error.

The `raise ValueError(...)` inside the `except ValueError` discards the original traceback. Use `raise ... from` to preserve it.

Proposed fix:

```diff
 try:
     return FunctionURLAuthType(value)
 except ValueError:
-    raise ValueError(
+    raise ValueError(
         f"Invalid auth type '{value}'. Must be one of: "
         f"{[e.value for e in FunctionURLAuthType]}"
-    )
+    ) from None
```
403-416: `logging.exception(e)`: the exception argument is redundant.

`logging.exception` already captures the active exception context automatically. Passing `e` causes the exception's `str()` to appear twice in the log (once as the message, once as the traceback). Pass a descriptive string instead.

Proposed fix:

```diff
 if retries == 5:
-    self.logging.error("Failed to create Function URL after 5 retries!")
-    self.logging.exception(e)
+    self.logging.exception(
+        "Failed to create Function URL after 5 retries!"
+    )
     raise RuntimeError("Failed to create Function URL!") from e
```
356-360: Existing Function URL's auth type may silently differ from the configured auth type.

When a pre-existing Function URL is found via `get_function_url_config`, its `AuthType` is used as-is (line 360), even if `self._function_url_auth_type` is set differently. This is probably the correct behavior (respect what's deployed), but a warning when they diverge would help operators notice the mismatch.

sebs/aws/triggers.py (2)
164-168: New `ThreadPoolExecutor` created per invocation - pre-existing pattern, but `shutdown` is inconsistent with `HTTPTrigger`.

`FunctionURLTrigger.async_invoke` calls `pool.shutdown(wait=False)` on line 167, which is good practice to allow cleanup. However, `HTTPTrigger.async_invoke` (line 117) does not. Consider aligning both for consistency - either add `shutdown` to `HTTPTrigger` as well, or remove it here.
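The shared class-level executor alternative mentioned in the review could look like this sketch; the base-class name and `_executor` attribute are hypothetical, not the PR's actual class hierarchy:

```python
import concurrent.futures


class TriggerWithSharedExecutor:
    # One executor shared by all trigger instances; worker threads are
    # reused instead of leaking a fresh pool on every async_invoke call.
    _executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)

    def sync_invoke(self, payload: dict) -> dict:
        raise NotImplementedError

    def async_invoke(self, payload: dict) -> concurrent.futures.Future:
        return type(self)._executor.submit(self.sync_invoke, payload)
```

Subclasses only override `sync_invoke`; both `HTTPTrigger` and `FunctionURLTrigger` would then inherit the same non-leaking async behavior.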
170-185: Align serialization key naming convention between `HTTPTrigger` and `FunctionURLTrigger`.

`HTTPTrigger` uses hyphens (`api-id`) while `FunctionURLTrigger` and `AWSResources.FunctionURL` both use underscores (`function_name`, `auth_type`). Additionally, `AWSResources.serialize()` uses hyphens for function URL keys (`function-urls`, `use-function-url`, `function-url-auth-type`), creating inconsistency between the parent class and nested `FunctionURL` serialization.

Change both `FunctionURLTrigger.serialize()`/`deserialize()` and `AWSResources.FunctionURL.serialize()`/`deserialize()` to use hyphens for consistency with the rest of the codebase.

Proposed changes in `sebs/aws/triggers.py`:

```diff
 def serialize(self) -> dict:
     return {
         "type": "FunctionURL",
         "url": self.url,
-        "function_name": self.function_name,
-        "auth_type": self.auth_type.value,
+        "function-name": self.function_name,
+        "auth-type": self.auth_type.value,
     }

 @staticmethod
 def deserialize(obj: dict) -> Trigger:
-    auth_type_str = obj.get("auth_type", "NONE")
+    auth_type_str = obj.get("auth-type", "NONE")
     return FunctionURLTrigger(
         obj["url"],
-        obj["function_name"],
+        obj["function-name"],
         FunctionURLAuthType.from_string(auth_type_str),
     )
```

In `sebs/aws/config.py`, update `AWSResources.FunctionURL.serialize()`/`deserialize()` similarly.
```python
# Check for unsupported auth type before attempting to create
if self._function_url_auth_type == FunctionURLAuthType.AWS_IAM:
    raise NotImplementedError(
        "AWS_IAM authentication for Function URLs is not yet supported. "
        "SigV4 request signing is required for AWS_IAM auth type. "
        "Please use auth_type='NONE' or implement SigV4 signing."
```

AWS_IAM guard is only at creation time - a cached AWS_IAM URL will bypass this check.

If a Function URL with AWS_IAM auth was created outside this tool (or in a future version), `function_url()` returns it from the cache at lines 340-342 or from the `get_function_url_config` path at lines 356-360 without ever hitting the `NotImplementedError` guard. The caller (and ultimately `FunctionURLTrigger.sync_invoke`) will fail later with an unsigned request producing a 403.

Consider moving the guard to after the URL object is resolved (or rely solely on the trigger-side guard in `FunctionURLTrigger.sync_invoke`, which already exists), rather than only blocking the creation path.
@Sharayu1418 I saw that you made some changes - do you plan to finish the PR?

Force-pushed: ef3163d to 313ebd1
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sebs/aws/config.py`:
- Around line 676-714: delete_function_url currently only removes the in-memory
cache on the success path, so when delete_function_url_config raises
ResourceNotFoundException the cached entry and the persisted
aws.resources.function-urls.* key can remain and later resurrect a deleted URL;
update delete_function_url (the method) to always remove the in-memory entry
from self._function_urls before returning (both on successful delete and in the
except lambda_client.exceptions.ResourceNotFoundException branch) and also
ensure you remove/clear the corresponding persisted
aws.resources.function-urls.* key or call the same cleanup/upsert removal used
elsewhere (so AWSResources.initialize cannot rehydrate a deleted URL).
- Around line 616-673: The code seeds auth_type locally but on the
ResourceConflictException path it only refreshes response, so if another process
created the Function URL with a different AuthType we record the wrong auth_type
in AWSResources.FunctionURL; after the get_function_url_config() response (and
also after create_function_url_config() success) resolve the actual auth type
from response["AuthType"] and set auth_type accordingly (e.g. map to
FunctionURLAuthType) before constructing AWSResources.FunctionURL and storing
into self._function_urls; ensure any permission logic (e.g. the
add_permission/StatementId "FunctionURLAllowPublicAccess" branch) aligns with
the resolved auth_type.
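The cleanup described in the 676-714 entry above can be sketched with a `finally` block, so the cache entry is dropped on both the success and the not-found path. `FunctionUrlResources` is a hypothetical stand-in for `AWSResources` (the persisted-cache removal is omitted for brevity):

```python
class FunctionUrlResources:
    """Stand-in sketch: the cache entry must not survive either branch."""

    def __init__(self):
        self._function_urls: dict = {}

    def delete_function_url(self, lambda_client, function_name: str) -> bool:
        try:
            lambda_client.delete_function_url_config(FunctionName=function_name)
            return True
        except lambda_client.exceptions.ResourceNotFoundException:
            return False
        finally:
            # Runs on every path, so a stale entry cannot resurrect the URL
            # the next time the cache is rehydrated.
            self._function_urls.pop(function_name, None)
```

The same idea applies to the persisted `aws.resources.function-urls.*` key: remove it unconditionally alongside the in-memory entry.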
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e8be2821-ff5c-4da2-b027-398856973a69
📒 Files selected for processing (5)
- configs/example.json
- sebs/aws/aws.py
- sebs/aws/config.py
- sebs/aws/function.py
- sebs/aws/triggers.py
✅ Files skipped from review due to trivial changes (1)
- configs/example.json
🚧 Files skipped from review as they are similar to previous changes (2)
- sebs/aws/function.py
- sebs/aws/triggers.py
```python
# Check for unsupported auth type before attempting to create
if self._function_url_auth_type == FunctionURLAuthType.AWS_IAM:
    raise NotImplementedError(
        "AWS_IAM authentication for Function URLs is not yet supported. "
        "SigV4 request signing is required for AWS_IAM auth type. "
        "Please use auth_type='NONE' or implement SigV4 signing."
    )
```

AWS_IAM is accepted in config but unusable at runtime.

The new enum/setter/cache flow all allow `FunctionURLAuthType.AWS_IAM`, but lines 596-602 unconditionally raise `NotImplementedError` when the trigger is created. Either implement signed Function URL invocation end-to-end or reject AWS_IAM during config parsing so users fail fast before deployment.
```python
ret._use_function_url = dct.get("use-function-url", False)
auth_type_str = dct.get("function-url-auth-type", "NONE")
ret.function_url_auth_type = FunctionURLAuthType.from_string(auth_type_str)
```

These two flags become sticky once a cache exists.

Because `AWSResources.deserialize()` prefers cached resources, persisting `use-function-url` and `function-url-auth-type` here means later config changes are ignored until the user clears the cache. Those are runtime settings, not discovered AWS resources, so they should come from the current config on each run.

Also applies to: 991-997
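One way to keep these as runtime settings, sketched as a hypothetical free function with simplified dict shapes rather than the real `AWSResources.deserialize` signature:

```python
def merge_resources(cached: dict, current_config: dict) -> dict:
    """Rehydrate discovered resources from the cache, but take runtime
    flags from the current config so they are never sticky."""
    return {
        # discovered AWS state: safe to restore from the cache
        "function-urls": cached.get("function-urls", {}),
        # runtime settings: always reflect this run's config
        "use-function-url": current_config.get("use-function-url", False),
        "function-url-auth-type": current_config.get(
            "function-url-auth-type", "NONE"
        ),
    }
```

With this split, editing the config between runs immediately changes trigger selection, while previously created Function URLs are still reused.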
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sebs/aws/config.py` around lines 940 - 942, AWSResources.deserialize is
persisting runtime flags (_use_function_url and function_url_auth_type) from the
serialized dct which makes those settings "sticky" when a cached AWSResources is
reused; change deserialize so it does NOT read or set ret._use_function_url and
ret.function_url_auth_type from the cached dct (remove or ignore the
dct.get("use-function-url", ...) and dct.get("function-url-auth-type", ...)
lines), and instead ensure these runtime flags are applied from the current
runtime/config after deserialize returns (or add parameters to
AWSResources.deserialize to accept current runtime values and set them
post-cache-load). Also apply the same change for the equivalent code around the
other occurrence (lines handling the same keys at 991-997) so runtime settings
are always taken from the current config rather than the cache.
We always apply permission policies + add cleanup
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/aws/test_function_url.py (1)
113-124: This checks idempotence, not cache usage.

The test still passes if `function_url()` round-trips to AWS every time, because the same Lambda Function URL is expected to be returned either way. If you want this to guard the cache behavior, assert against the local resource cache state or add a unit test that verifies no second `get`/`create_function_url_config` call is made.
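The mocked variant of that test can be sketched with `unittest.mock`; `UrlCache` below is a hypothetical stand-in for the project's resource cache, not its real class:

```python
from unittest.mock import MagicMock


class UrlCache:
    """Minimal stand-in: hit AWS only on a cache miss."""

    def __init__(self, client):
        self.client = client
        self._urls: dict = {}

    def function_url(self, name: str) -> str:
        if name not in self._urls:
            resp = self.client.create_function_url_config(FunctionName=name)
            self._urls[name] = resp["FunctionUrl"]
        return self._urls[name]


client = MagicMock()
client.create_function_url_config.return_value = {
    "FunctionUrl": "https://abc.on.aws/"
}
cache = UrlCache(client)

assert cache.function_url("f") == cache.function_url("f")
# The guard the reviewer asks for: AWS was hit exactly once.
assert client.create_function_url_config.call_count == 1
```

Unlike the idempotence check, this fails if the implementation round-trips to AWS on the second call.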
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@sebs/aws/config.py`:
- Around line 349-351: The current constructor and initialize() set
self._use_function_url default to True which causes absent config/cache entries
to flip deployments to Function URLs; change the default to False and ensure
initialize() and any config-loading logic check for the explicit presence of the
flag rather than assuming True when the flag is missing. Specifically, update
the constructor assignment of self._use_function_url to False, update
initialize() to default to False when no flag is provided, and modify any code
that reads the flag to use an explicit presence check (e.g., key in dict) before
assigning to self._use_function_url so missing entries preserve API Gateway as
the default.
- Around line 603-619: The cleanup collects URLs instead of function names, so
cache eviction and _function_urls.pop never match; in the loop that iterates
dict_copy.items() (using self._function_urls, func_name, func_url and calling
delete_function_url) change the appended value from func_url.url to func_name so
the later loop uses the correct keys for
cache_client.remove_config_key(["aws","resources","function-urls", func_name])
and self._function_urls.pop(func_name, None); keep the dry_run handling and
logging as-is.
📒 Files selected for processing (3)
- sebs/aws/aws.py
- sebs/aws/config.py
- tests/aws/test_function_url.py
Closes #273

Summary

Changes

- `FunctionURLTrigger` class in `sebs/aws/triggers.py` - HTTP trigger implementation using Lambda Function URLs
- `sebs/aws/config.py`:
  - `use-function-url` (bool, default: `false`) - enable Function URLs instead of API Gateway
  - `function-url-auth-type` (`"NONE"` | `"AWS_IAM"`, default: `"NONE"`) - authentication type
  - `delete_function_url()` method for resource cleanup
- `sebs/aws/aws.py` updated to check the config flag and create the appropriate trigger type