
Fix Cloudflare tunnel error for WriteBot VM #87

Merged
ariedotcodotnz merged 3 commits into main from claude/fix-writebot-cloudflare-F5O2v
Jan 7, 2026

Conversation

@ariedotcodotnz
Owner

No description provided.

claude added 3 commits January 6, 2026 22:44
- Upgrade CUDA from 12.3.2 to 13.0.1 for Blackwell architecture
- Upgrade Ubuntu base from 22.04 to 24.04
- Upgrade TensorFlow to 2.18+ with CUDA 13.0 support
- Upgrade tensorflow-probability to 0.25.0+
- Replace internal TF APIs (_maybe_tensor_shape_from_tensor, _concat,
  assert_like_rnncell) with local implementations in operations.py
- Replace deprecated is_in_graph_mode.IS_IN_GRAPH_MODE() with
  tf.executing_eagerly() check
- Replace tf.experimental.numpy.ones_like with tf.ones_like in
  LSTMAttentionCell.py

These changes ensure compatibility with TensorFlow 2.18+, which is
required for CUDA 13.0 and RTX 50 series (Blackwell) GPU support.
TF 2.16+ defaults to Keras 3, which breaks TF1 compat code.
- Install tf-keras package for Keras 2 implementation
- Set TF_USE_LEGACY_KERAS=1 to make tf.keras resolve to Keras 2

This ensures the TF1 compat code (tfcompat.keras.initializers,
mixed_precision, etc.) continues to work correctly.
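The commit above replaces TF's removed internal `assert_like_rnncell` with a local implementation. A minimal, TF-free sketch of how such a duck-typing check can look (the helper body and the toy `MinimalCell` class are illustrative, not the exact code in operations.py):

```python
class MinimalCell:
    """Toy object shaped like an RNN cell, used only to exercise the check."""
    output_size = 4
    state_size = 4

    def __call__(self, inputs, state):
        return inputs, state


def assert_like_rnncell(cell_name, cell):
    """Duck-typing stand-in for TF's removed internal assert_like_rnncell.

    Accepts any object that quacks like an RNN cell: it must expose
    output_size and state_size and be callable.
    """
    conditions = [
        hasattr(cell, "output_size"),
        hasattr(cell, "state_size"),
        callable(cell),
    ]
    errors = [
        "'output_size' property is missing",
        "'state_size' property is missing",
        "is not callable",
    ]
    problems = [err for ok, err in zip(conditions, errors) if not ok]
    if problems:
        raise TypeError(
            f"The argument {cell_name!r} ({cell}) is not an RNNCell: "
            + ", ".join(problems) + "."
        )


assert_like_rnncell("cell", MinimalCell())  # passes silently
```

Because the check is pure duck typing, it works for both Keras RNN cells and hand-rolled cells like `LSTMAttentionCell` without importing any TF internals.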
Copilot AI review requested due to automatic review settings January 7, 2026 00:30
@ariedotcodotnz ariedotcodotnz merged commit 82fe873 into main Jan 7, 2026
9 of 12 checks passed

Copilot AI left a comment


Pull request overview

This PR aims to fix Cloudflare tunnel errors for the WriteBot VM by updating TensorFlow compatibility, upgrading dependencies, and migrating to newer CUDA/Ubuntu versions for RTX 50 series GPU support.

  • Replaces removed TensorFlow internal functions with custom implementations for TF 2.x compatibility
  • Updates API calls to use stable TensorFlow APIs instead of experimental ones
  • Upgrades CUDA, Ubuntu, TensorFlow, and related dependencies to support RTX 50 series (Blackwell) GPUs

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

File Description
  • handwriting_synthesis/rnn/operations.py — Implements custom versions of deprecated TensorFlow RNN utilities and updates graph mode detection
  • handwriting_synthesis/rnn/LSTMAttentionCell.py — Replaces the experimental API with stable tf.ones_like
  • Dockerfile.gpu — Upgrades CUDA to 13.0.1, Ubuntu to 24.04, TensorFlow to 2.18+, and adds Keras 2 compatibility



# Install TensorFlow with CUDA support
RUN pip install --no-cache-dir --user "tensorflow[and-cuda]>=2.15.0" "tensorflow-probability>=0.23.0"
# Install TensorFlow 2.18+ with CUDA 13.0 support for RTX 50 series (Blackwell)

Copilot AI Jan 7, 2026


The comment references "CUDA 13.0" which does not exist. As of January 2025, CUDA 13 has not been released. Update this comment to reflect the actual CUDA version being used.

Suggested change
# Install TensorFlow 2.18+ with CUDA 13.0 support for RTX 50 series (Blackwell)
# Install TensorFlow 2.18+ with CUDA support (version set via CUDA_VERSION) for RTX 50 series (Blackwell)

ENV TF_FORCE_GPU_ALLOW_GROWTH=true
# Enable TensorFloat-32 for RTX 30/40/50 series
ENV TF_ENABLE_TF32=1
# Use Keras 2 (tf-keras) instead of Keras 3 for TF1 compat code compatibility

Copilot AI Jan 7, 2026


The comment mentions "TF1 compat code compatibility" which contains redundant wording. It should be simplified to avoid the awkward "compat code compatibility" phrase.

Suggested change
# Use Keras 2 (tf-keras) instead of Keras 3 for TF1 compat code compatibility
# Use Keras 2 (tf-keras) instead of Keras 3 for TF1 compatibility

conditions = [
    hasattr(cell, "output_size"),
    hasattr(cell, "state_size"),
    hasattr(cell, "__call__") or callable(cell),

Copilot AI Jan 7, 2026


The condition hasattr(cell, "__call__") or callable(cell) is redundant. The callable() function returns True if the object has a __call__ method, so hasattr(cell, "__call__") is unnecessary when callable(cell) is also checked. Simplify to just callable(cell).

Suggested change
hasattr(cell, "__call__") or callable(cell),
callable(cell),
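A quick check confirms the redundancy Copilot flags: for ordinary objects the two conditions agree, and in the one corner case where they diverge (`__call__` attached to an instance rather than its class) Python would not invoke it anyway, so `callable()` is the right test. A small demonstration (the toy classes are illustrative):

```python
class Cell:
    def __call__(self, inputs):
        return inputs


class NotACell:
    pass


# The two checks agree on both callables and non-callables, so
# `hasattr(cell, "__call__") or callable(cell)` reduces to callable(cell).
assert callable(Cell()) and hasattr(Cell(), "__call__")
assert not callable(NotACell()) and not hasattr(NotACell(), "__call__")
```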

Comment on lines +26 to +30
p = tf.constant(prefix, dtype=tf.int32) if not isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
if isinstance(suffix, ops.Tensor):
    s = suffix
else:
    s = tf.constant(suffix, dtype=tf.int32) if not isinstance(suffix, list) else tf.constant(suffix, dtype=tf.int32)

Copilot AI Jan 7, 2026


The logic for handling list vs non-list prefix is inverted. Currently: if prefix is NOT a list, it's not wrapped; if it IS a list, it gets wrapped in another list. This should be reversed: scalar values should be wrapped to make them 1D arrays, while lists should be used as-is. The same issue exists for the suffix handling on line 30.

Suggested change
p = tf.constant(prefix, dtype=tf.int32) if not isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
if isinstance(suffix, ops.Tensor):
    s = suffix
else:
    s = tf.constant(suffix, dtype=tf.int32) if not isinstance(suffix, list) else tf.constant(suffix, dtype=tf.int32)
p = tf.constant(prefix, dtype=tf.int32) if isinstance(prefix, list) else tf.constant([prefix], dtype=tf.int32)
if isinstance(suffix, ops.Tensor):
    s = suffix
else:
    s = tf.constant(suffix, dtype=tf.int32) if isinstance(suffix, list) else tf.constant([suffix], dtype=tf.int32)
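The wrapping rule the suggestion restores — scalars become one-element lists, lists pass through unchanged — can be isolated in a small TF-free helper; a sketch (the name `as_length_list` is illustrative, not from the PR):

```python
def as_length_list(value):
    """Wrap a scalar in a one-element list; pass lists through unchanged."""
    return value if isinstance(value, list) else [value]


# A scalar dimension becomes a 1-D shape component...
assert as_length_list(3) == [3]
# ...while a list is used as-is rather than being nested one level deeper.
assert as_length_list([3, 4]) == [3, 4]
```

With the original (inverted) logic, a list input would have produced a nested `[[3, 4]]`, which `tf.constant(..., dtype=tf.int32)` turns into a rank-2 tensor instead of the intended rank-1 shape.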



def _concat(prefix, suffix, static=False):
    """Concat prefix and suffix, handling both static and dynamic shapes."""

Copilot AI Jan 7, 2026


The static parameter is defined but never used in the function body. Either implement the static/dynamic handling logic that this parameter suggests, or remove it from the function signature.

Suggested change
    """Concat prefix and suffix, handling both static and dynamic shapes."""
    """Concat prefix and suffix, handling both static and dynamic shapes."""
    # If requested, perform purely static (Python-level) concatenation when possible.
    if static and not isinstance(prefix, ops.Tensor) and not isinstance(suffix, ops.Tensor):
        # Convert TensorShape objects to lists for concatenation.
        if isinstance(prefix, tensor_shape.TensorShape):
            prefix = prefix.as_list()
        if isinstance(suffix, tensor_shape.TensorShape):
            suffix = suffix.as_list()
        # Only perform static concatenation when both are sequence-like.
        if isinstance(prefix, (list, tuple)) and isinstance(suffix, (list, tuple)):
            return prefix + suffix
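The static branch the suggestion adds can be exercised without TensorFlow by treating shapes as plain sequences (a stand-in for `TensorShape.as_list()`); a minimal sketch under that assumption (the name `concat_shapes` is illustrative):

```python
def concat_shapes(prefix, suffix, static=False):
    """Concatenate two shape-like values.

    With static=True, do the concatenation at the Python level when both
    inputs are plain sequences; otherwise a real implementation would fall
    through to a dynamic tf.concat on tensors (omitted here).
    """
    if static and isinstance(prefix, (list, tuple)) and isinstance(suffix, (list, tuple)):
        return list(prefix) + list(suffix)
    raise NotImplementedError("the dynamic path requires TensorFlow")


# Static concatenation works on lists and tuples alike.
assert concat_shapes([2, 3], [4], static=True) == [2, 3, 4]
assert concat_shapes((2,), (3, 5), static=True) == [2, 3, 5]
```

Keeping the `static` flag honest this way means callers that ask for a static shape get an ordinary Python list back, with no graph ops created.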

# GPU-enabled Dockerfile for WriteBot
# Optimized for NVIDIA RTX 50 series (Blackwell) and RTX 40/30 series GPUs
# Uses CUDA 12.x for best performance with modern GPUs
# Uses CUDA 13.x for RTX 50 series (Blackwell) support

Copilot AI Jan 7, 2026


The comment references "CUDA 13.x" which does not exist. CUDA 13 has not been released as of January 2025. Update this comment to reflect the actual CUDA version being used.

Suggested change
# Uses CUDA 13.x for RTX 50 series (Blackwell) support
# Uses the configured CUDA runtime (via CUDA_VERSION) for RTX 50 series (Blackwell) support
