probe: stlink: Fix 1-byte transfers
Internally the 1-byte transfers are handled in 3 phases:
1. read/write 8-bit chunks until the first aligned address is reached,
2. read/write 32-bit chunks from all aligned addresses,
3. read/write 8-bit chunks from the remaining unaligned addresses.

The size of the first unaligned read/write is set to the result of the
4-byte address-alignment check and can be 1, 2, or 3 bytes (the value
of `unaligned_count`, calculated as `addr & 0x3`). This is incorrect:
every transfer with a requested size smaller than `unaligned_count` is
terminated with the following error:

  Unhandled exception in handle_message (b'm'): result size (3) != requested size (1) [gdbserver]

Skip the first unaligned transfer if the requested size is so small that
phase-1 would not even reach an aligned address, and handle the whole
request in the second unaligned read/write (phase-3).
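
The phase split described above can be sketched as a standalone helper. This is a hypothetical `split_transfer` function mirroring the fixed logic, not part of pyOCD:

```python
def split_transfer(addr, size):
    """Split a byte transfer into the three phases used by
    read_memory_block8/write_memory_block8: returns the byte counts for
    phase 1 (leading 8-bit), phase 2 (aligned 32-bit), phase 3 (trailing 8-bit)."""
    # Bytes needed to reach the next 4-byte boundary (0 if already aligned).
    unaligned_count = 3 & (4 - addr)
    # The fix: run phase 1 only if the request extends past the boundary;
    # otherwise the whole request falls through to phase 3.
    leading = unaligned_count if size > unaligned_count > 0 else 0
    size -= leading
    aligned = size & ~3 if size >= 4 else 0
    trailing = size - aligned
    return leading, aligned, trailing
```

For example, `split_transfer(0x20000001, 1)` returns `(0, 0, 1)`: the 1-byte request is handled entirely in phase 3, whereas the pre-fix code attempted a leading read larger than the request and triggered the gdbserver size-mismatch error above.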
fkjagodzinski committed Nov 28, 2022
1 parent 72299f5 commit 44319b8
Showing 1 changed file with 14 additions and 7 deletions.
pyocd/probe/stlink_probe.py
@@ -328,21 +328,28 @@ def read_memory_block8(self, addr: int, size: int, **attrs: Any) -> Sequence[int
         csw = attrs.get('csw', 0)
         res = []
 
-        # read leading unaligned bytes
-        unaligned_count = addr & 3
-        if (size > 0) and (unaligned_count > 0):
+        # Transfers are handled in 3 phases:
+        # 1. read 8-bit chunks until the first aligned address is reached,
+        # 2. read 32-bit chunks from all aligned addresses,
+        # 3. read 8-bit chunks from the remaining unaligned addresses.
+        # If the requested size is so small that phase-1 would not even reach
+        # aligned address, go straight to phase-3.
+
+        # 1. read leading unaligned bytes
+        unaligned_count = 3 & (4 - addr)
+        if (size > unaligned_count > 0):
             res += self._link.read_mem8(addr, unaligned_count, self._apsel, csw)
             size -= unaligned_count
             addr += unaligned_count
 
-        # read aligned block of 32 bits
+        # 2. read aligned block of 32 bits
         if (size >= 4):
             aligned_size = size & ~3
             res += self._link.read_mem32(addr, aligned_size, self._apsel, csw)
             size -= aligned_size
             addr += aligned_size
 
-        # read trailing unaligned bytes
+        # 3. read trailing unaligned bytes
         if (size > 0):
             res += self._link.read_mem8(addr, size, self._apsel, csw)
 
@@ -355,8 +362,8 @@ def write_memory_block8(self, addr: int, data: Sequence[int], **attrs: Any) -> N
         idx = 0
 
         # write leading unaligned bytes
-        unaligned_count = addr & 3
-        if (size > 0) and (unaligned_count > 0):
+        unaligned_count = 3 & (4 - addr)
+        if (size > unaligned_count > 0):
             self._link.write_mem8(addr, data[:unaligned_count], self._apsel, csw)
             size -= unaligned_count
             addr += unaligned_count
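
A quick check of the two formulas shows why the replacement is needed: the old `addr & 3` gives the offset into the 4-byte word, while the fixed `3 & (4 - addr)` gives the bytes remaining to the next boundary.

```python
# Compare the old and fixed unaligned-count formulas for each possible
# address offset within a 4-byte word.
for offset in range(4):
    old = offset & 3        # old: offset into the word
    new = 3 & (4 - offset)  # fixed: bytes to the next 4-byte boundary
    print(offset, old, new)
```

The two agree only at offsets 0 and 2; at offsets 1 and 3 the old formula transfers the wrong number of leading bytes, leaving `addr` unaligned when the 32-bit phase begins.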
