Conversation
The latest Buf updates on your PR. Results from workflow CI and Release / buf-check (pull_request).
Summary of Changes

Hello @julienrbrt, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request, explicitly marked as 'DO NOT MERGE', appears to be a temporary branch for debugging or testing a recovery mechanism. It significantly alters the system's operational flow by disabling key components, hardcoding data retrieval, and simplifying the data submission process to isolate and test specific behaviors related to recovery. The changes are not intended for production and serve as a development snapshot.

Highlights
Code Review for PR #2730: DO NOT MERGE fix: recovery

🔴 Critical Security & Stability Issues

1. Hard-coded Height Value (CRITICAL BUG)
Location: item, err := pb.fetch(ctx, pb.store, 2402427)
Issues:
Impact: HIGH - This breaks the core data availability submission mechanism.

2. Process Termination with os.Exit(0) (CRITICAL)
Location: os.Exit(0)
Issues:
Impact: HIGH - Causes abrupt process termination, potential data corruption, and prevents proper cleanup.
Recommendation: Remove the os.Exit(0) call.

3. Commented Out Critical Components
Location: The Executor and Reaper components are completely commented out.
Issues:
Impact: CRITICAL - Aggregator nodes will fail to start or panic at runtime.

4. Disabled DA Inclusion Processing Loop
Location:
// go func() {
// defer s.wg.Done()
// s.processDAInclusionLoop()
// }()
Issues:
Impact: HIGH - Breaks the finality mechanism across all node types.

5. Disabled Last Submitted Height Tracking
Location:
// if l := len(submitted); l > 0 {
// lastHeight := submitted[l-1].Height()
// cache.SetLastSubmittedDataHeight(ctx, lastHeight)
// }
Issues:
Impact: MEDIUM - Cache inconsistency leading to incorrect DA submission behavior.

6. Completely Gutted DA Submission Loop
Location: The entire timer-based submission logic is commented out.
Impact: CRITICAL - Completely breaks the DA submission mechanism.

🟡 Code Quality Issues

📋 Recommendations
This PR appears to be debug/development code that should not be merged.
If this is intended as a fix:

Security Note: While this appears to be debugging code rather than malicious, the hard-coded height and abrupt process termination create significant reliability and data integrity risks.

✅ What to Do Next
Based on the PR title "DO NOT MERGE fix: recovery", this appears to be a work-in-progress debugging branch. Consider:
🤖 Generated with Claude Code
Code Review
This pull request, titled 'DO NOT MERGE fix: recovery', appears to be a temporary solution for a recovery task. It introduces several changes that are unsuitable for a production environment, such as hardcoding a block height, disabling core processing loops, and including an os.Exit(0) call that would terminate the node. While these may be useful for a one-off debugging or recovery script, they introduce critical issues and must be reverted before this PR can be considered for merging. Additionally, there are large blocks of commented-out code that should be cleaned up to maintain code quality.
pending := make([]T, 0)
item, err := pb.fetch(ctx, pb.store, 2402427)
if err != nil {
	return nil, err
}
if lastSubmitted == height {
	return nil, nil
}
if lastSubmitted > height {
	return nil, fmt.Errorf("height of last submitted item (%d) is greater than height of last item (%d)", lastSubmitted, height)
}
pending := make([]T, 0, height-lastSubmitted)
for i := lastSubmitted + 1; i <= height; i++ {
	item, err := pb.fetch(ctx, pb.store, i)
	if err != nil {
		return pending, err
	}
	pending = append(pending, item)
	return pending, err
}
pending = append(pending, item)
return pending, nil
The getPending function has been modified to fetch a hardcoded height 2402427. This breaks the core functionality of fetching all pending items based on the last submitted height. This change must be reverted to restore the correct behavior. Hardcoding values like this is dangerous and makes the system brittle.
Suggested:

lastSubmitted := pb.lastHeight.Load()
height, err := pb.store.Height(ctx)
if err != nil {
	return nil, err
}
if lastSubmitted == height {
	return nil, nil
}
if lastSubmitted > height {
	return nil, fmt.Errorf("height of last submitted item (%d) is greater than height of last item (%d)", lastSubmitted, height)
}
pending := make([]T, 0, height-lastSubmitted)
for i := lastSubmitted + 1; i <= height; i++ {
	item, err := pb.fetch(ctx, pb.store, i)
	if err != nil {
		return pending, err
	}
	pending = append(pending, item)
}
return pending, nil
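If a one-off recovery really does need to resubmit a specific height such as 2402427, a less brittle option than hard-coding it would be to read an optional override from the environment or config and fall back to the normal pending logic otherwise. The following is a minimal hedged sketch of that idea; EV_RECOVERY_HEIGHT and recoveryHeight are hypothetical names, not part of this codebase.

package main

import (
	"fmt"
	"os"
	"strconv"
)

// recoveryHeight returns an optional override height for a one-off recovery
// run; 0 means "no override, walk lastSubmitted+1..head as usual".
// EV_RECOVERY_HEIGHT is a hypothetical variable name used for illustration.
func recoveryHeight() (uint64, error) {
	v := os.Getenv("EV_RECOVERY_HEIGHT")
	if v == "" {
		return 0, nil
	}
	h, err := strconv.ParseUint(v, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid EV_RECOVERY_HEIGHT %q: %w", v, err)
	}
	return h, nil
}

func main() {
	h, err := recoveryHeight()
	if err != nil {
		fmt.Println("config error:", err)
		return
	}
	if h > 0 {
		fmt.Println("recovery override active, fetching single height:", h)
	} else {
		fmt.Println("no override, using the normal pending logic")
	}
}

This keeps getPending intact and makes the recovery behaviour explicit and reversible instead of baked into the code.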
The daSubmissionLoop has been drastically changed. The periodic submission logic is commented out and replaced with a single submission attempt followed by os.Exit(0). Calling os.Exit in a service like this is extremely dangerous and will cause the node to terminate unexpectedly. This entire block should be reverted to its original implementation with the ticker-based loop for continuous operation.
ticker := time.NewTicker(s.config.DA.BlockTime.Duration)
defer ticker.Stop()

metricsTicker := time.NewTicker(30 * time.Second)
defer metricsTicker.Stop()

for {
	select {
	case <-s.ctx.Done():
		return
	case <-ticker.C:
		// Submit headers
		if s.cache.NumPendingHeaders() != 0 {
			if s.headerSubmissionMtx.TryLock() {
				go func() {
					defer s.headerSubmissionMtx.Unlock()
					if err := s.daSubmitter.SubmitHeaders(s.ctx, s.cache); err != nil {
						s.logger.Error().Err(err).Msg("failed to submit headers")
					}
				}()
			}
		}
		// Submit data
		if s.cache.NumPendingData() != 0 {
			if s.dataSubmissionMtx.TryLock() {
				go func() {
					defer s.dataSubmissionMtx.Unlock()
					if err := s.daSubmitter.SubmitData(s.ctx, s.cache, s.signer, s.genesis); err != nil {
						s.logger.Error().Err(err).Msg("failed to submit data")
					}
				}()
			}
		}
	case <-metricsTicker.C:
		s.updateMetrics()
	}
}
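If the intent is a single recovery submission followed by shutdown, a hedged alternative to calling os.Exit(0) is to cancel the service's context and let the normal teardown path run, so deferred cleanup (tickers, caches, stores) is not skipped. The Service and submitOnce names below are hypothetical stand-ins, not the actual types in this repository; this is a minimal sketch of the pattern, not the project's implementation.

package main

import (
	"context"
	"fmt"
	"time"
)

// Service is a stand-in for the DA submitter; cancel requests shutdown.
type Service struct {
	cancel context.CancelFunc
}

// submitOnce performs a single submission attempt and then asks the service
// to stop by cancelling its context instead of calling os.Exit(0), so
// deferred cleanup still runs.
func (s *Service) submitOnce(ctx context.Context) error {
	// ... perform the one-off DA submission here ...
	fmt.Println("submitted pending data once")
	s.cancel()
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	s := &Service{cancel: cancel}

	if err := s.submitOnce(ctx); err != nil {
		fmt.Println("submission failed:", err)
	}

	select {
	case <-ctx.Done():
		// normal teardown path: stop tickers, flush caches, close stores
		fmt.Println("shutting down cleanly")
	case <-time.After(5 * time.Second):
		fmt.Println("timed out waiting for shutdown signal")
	}
}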
// if l := len(submitted); l > 0 {
// lastHeight := submitted[l-1].Height()
// cache.SetLastSubmittedDataHeight(ctx, lastHeight)
// }
The logic to update the last submitted data height has been commented out. This will prevent the system from tracking which data has been submitted to the DA layer, likely causing data to be re-submitted repeatedly. This should be uncommented to ensure correct operation.
Suggested:

if l := len(submitted); l > 0 {
	lastHeight := submitted[l-1].Height()
	cache.SetLastSubmittedDataHeight(ctx, lastHeight)
}
defer s.logger.Info().Msg("DA submission loop stopped")

ticker := time.NewTicker(s.config.DA.BlockTime.Duration)
defer ticker.Stop()
The goroutine that starts processDAInclusionLoop has been commented out. This disables DA inclusion processing, which is a core feature for ensuring data availability. This should be uncommented to restore the node's functionality.
Current:

// go func() {
// defer s.wg.Done()
// s.processDAInclusionLoop()
// }()

Suggested:

go func() {
	defer s.wg.Done()
	s.processDAInclusionLoop()
}()
// executor, err := executing.NewExecutor(
// store,
// exec,
// sequencer,
// signer,
// cacheManager,
// metrics,
// config,
// genesis,
// headerBroadcaster,
// dataBroadcaster,
// logger,
// blockOpts,
// errorCh,
// )
// if err != nil {
// return nil, fmt.Errorf("failed to create executor: %w", err)
// }

reaper, err := reaping.NewReaper(
	exec,
	sequencer,
	genesis,
	logger,
	executor,
	reaping.DefaultInterval,
)
if err != nil {
	return nil, fmt.Errorf("failed to create reaper: %w", err)
}
// reaper, err := reaping.NewReaper(
// exec,
// sequencer,
// genesis,
// logger,
// executor,
// reaping.DefaultInterval,
// )
// if err != nil {
// return nil, fmt.Errorf("failed to create reaper: %w", err)
// }
This large block of commented-out code for creating the executor and reaper should be removed. If this logic is no longer needed, the code should be deleted. If it's temporarily disabled for debugging, it should be restored before this pull request is merged. Leaving commented-out code reduces maintainability.
| "context" | ||
| "encoding/binary" | ||
| "fmt" | ||
| "os" |
done. |