Point form for speedy writing. 80% correct at the time of writing. Just to remind myself what I did. Timestamps are not accurate, as I may recall things and write about something I learned some time ago.
Table of Contents
Area | Product | Date |
---|---|---|
Azure | Azure Container Apps | 2024-01-27 |
Azure | Azure Cosmos DB Core | 2023-12-24 2023-12-26 2024-01-07 |
Azure | Azure Functions | 2023-12-25 2023-12-26 2024-01-08 |
Azure | Azure Managed Identity | 2024-07-13 |
Clover | Rest API | 2024-12-21 |
JavaScript | Iterable/iterator/generator | 2024-01-22 |
JavaScript | Node.js Test Runner | 2025-07-20 |
JavaScript | ReadableStream | 2025-03-29 |
JavaScript | Valibot | 2024-01-10 |
Raspberry Pi | CUPS | 2025-02-15 |
Raspberry Pi | Pi-Hole | 2024-02-27 2023-12 |
Raspberry Pi | WireGuard | 2025-02-15 2025-06-16 |
React | Fluent UI | 2023-12-25 |
Hardware | Happy Hacking Keyboard | 2023-12-24 |
"Adding features means touching code. Touching code could introduce bugs. As developer, bugs are nightmare. We should appreciate courageous developers who add features without fear of bugs. They take the risk and manage it well."
- `ReactNode` vs. `ReactElement`
  - (Input, broader) `children` prop is `ReactNode`
  - (Output, narrower) `FunctionComponent<P>` is `(props: P) => ReactElement | null`
- To hide a prop, instead of using a private/random value as the prop key, use React context (see the sketch after this list)
- To turn a render function into a component: `function MyComponent({ renderFn }) { return <>{renderFn()}</> }`
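A minimal sketch of the context approach above; `SecretValueContext` and `useSecretValue` are hypothetical names:

```tsx
// The value never surfaces as a prop on any public component,
// so consumers cannot read or override it through props.
import { createContext, useContext, type PropsWithChildren } from 'react';

const SecretValueContext = createContext<string | undefined>(undefined);

export function MyProvider({ children }: PropsWithChildren) {
  return <SecretValueContext.Provider value="hidden">{children}</SecretValueContext.Provider>;
}

// Internal components read the value through a hook instead of a prop.
export function useSecretValue(): string | undefined {
  return useContext(SecretValueContext);
}
```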
Converts a video to ProRes Proxy.
```sh
ffmpeg -i input.mp4 -c:v prores_aw -profile:v 0 -pix_fmt yuv422p10le output.mov
```
With half resolution.
```sh
ffmpeg -i input.mp4 -c:v prores_aw -profile:v 0 -pix_fmt yuv422p10le -vf "scale=iw/2:ih/2" output.mov
```
These instructions may not be 100% accurate on color space and gamma, use with care.
- DaVinci Resolve
  - Timeline color management: DaVinci YRGB Color Managed, color processing mode HDR, output color space HDR HLG
  - Color: add a final node with a Color Space Transform effect (Rec.2020/Rec.2100 HLG -> sRGB/Rec.2100 ST2084)
  - Deliver: TIFF, RGB 16-bit, color space tag Rec.2020, gamma tag Rec.2100 ST2084

```sh
ffmpeg -i davinci.tif -color_primaries bt2020 -color_trc smpte2084 krita.jxl -y
```

- Tags the output with color space Rec.2020, transfer function Rec.2100 ST2084 (a.k.a. PQ)
- Open Krita, save as TIFF, check "Store alpha channel (transparency)"
  - Will save as a 128-bit TIFF

```sh
JxrEncApp -i krita.tif -o final.jxr -q 1
```
- JPEG XR is rooted in Windows HD Photo
  - Windows, Xbox, and DirectX use it
- Tool + SDK is open source on Linux as libjxr-tools, but it's quite restrictive and doesn't work most of the time
- Another SDK is a Windows OS component called Windows Imaging Component (WIC)
- Color space is assumed to be scRGB and gamma is assumed to be 1.0 (linear)
  - scRGB is mostly like sRGB but with luminance values ranging from -0.5 to ~7.5, HDR ready
- Windows Photos app supports HDR photos in both JPEG XR and JPEG XL, but HDR photos in TIFF and PNG will show in SDR, not HDR
  - Windows Photos can re-save any photo to JPEG XR, but only saves as SDR
  - A 3P converter app using WIC seems to behave the same way, only saves as SDR
- ffmpeg has limited TIFF support over pixel formats
  - SDR pixel formats (<= 8 bit) are generally good, but for HDR, it seems to only support a few pixel formats: `rgb48le`, `rgba64le`
- libjxr-tools accepts quite a lot of pixel formats for HDR
  - However, ffmpeg and libjxr-tools don't overlap in their pixel format support
- How to convert HDR photos to JPEG XR
  - Given an HDR photo in JPEG XL format
    - If the input is not JPEG XL (e.g. PNG and TIFF from DaVinci Resolve), use ffmpeg to convert it into JPEG XL
    - Must have proper color primaries (Rec.2020) and transfer function (Rec.2100 ST2084 PQ)
    - Windows Photos should load the image in HDR
  - Use Krita to convert the JPEG XL into TIFF
    - Possibly an image with 32 bits per channel, a 128-bit RGBA floating point image
    - ffmpeg does not support this pixel format
  - Use libjxr-tools to convert the TIFF into JPEG XR
Related reading on Reddit.
Color space means everything: color primaries, transfer function, white point, etc.

Color primaries are a conversion map from a series of numbers (RGB, YUV, CMYK) to an actual color (wavelength). 100% red, 0% green, 0% blue in sRGB and in Rec.709 look very similar, minus the luminance difference due to different transfer functions. But 100%/0%/0% in Rec.709 and in Rec.2020 are different reds.

Transfer function is a function that converts a number (luminance value) to an actual brightness. It usually uses a gamma function, but HDR uses HLG/PQ. RGB has no explicit luminance value; Y in YUV and K in CMYK is the luminance value.

- sRGB is mostly for computers, effective gamma of 2.2
  - Darker part is 1.0 (linear), brighter part is 2.4
- Rec.709 is mostly for SDR, generally pure gamma 2.4
  - sRGB and Rec.709 share the same color primaries
- Rec.2020 is mostly for HDR, gamma is either HLG or PQ
  - Rec.2020 is a different/expanded color space
| `-color_primaries` | Description |
|---|---|
| `bt709` | Rec.709 (also same as sRGB) |
| `bt2020` | Rec.2020 |
| `smpte432` | DCI-P3 (SMPTE-432) |
| `-color_trc` | Description |
|---|---|
| `iec61966-2-1` | sRGB |
| `bt709` | Rec.709 |
| `arib-std-b67` | Rec.2020 HDR HLG |
| `smpte2084` | Rec.2020 HDR PQ (ST2084) |
| `smpte428` | DCI-P3 (SMPTE-428, same as gamma 2.6) |
| `linear` | Gamma 1.0 |
| `-colorspace` | Description |
|---|---|
| `rgb` | RGB (passthrough, for DCI-P3) |
| `bt709` | Rec.709 |
| `bt2020_ncl` | Rec.2020 non-constant luminance |
| `bt2020_cl` | Rec.2020 constant luminance |
| Color space | Arguments |
|---|---|
| sRGB | `-color_primaries bt709 -color_trc iec61966-2-1 -colorspace rgb` |
| Rec.709 | `-color_primaries bt709 -color_trc bt709 -colorspace bt709` |
| Rec.2020 HLG | `-color_primaries bt2020 -color_trc arib-std-b67 -colorspace bt2020_ncl` |
| Rec.2020 PQ | `-color_primaries bt2020 -color_trc smpte2084 -colorspace bt2020_ncl` |
| DCI-P3 | `-color_primaries smpte432 -color_trc smpte428 -colorspace rgb` |
- Why is Node.js Test Runner not ready for prime time yet?
  - Node.js cannot natively transpile JSX
  - React 16 does not use the `import` statement but `require()`, forcing tests to be CommonJS
    - However, source code is usually ES Modules, thus the need to skip tests for React 16
- Jest to Node.js Test Runner (a sketch of the mappings follows this list)
  - Doable
    - Move from `jest.fn()` to `mock.fn()`
      - Jest: `expect(fn).toHaveBeenCalledTimes(1)`
      - Node.js: `expect(fn.mock.calls).toHaveProperty('length', 1)`
        - Or write an `expect.extend` matcher
    - Move from `jest.spyOn(console, 'error')` to `mock.method(console, 'error')`
  - Not easily doable, didn't try
    - Transform .jsx on-the-fly with `node --experimental-loader`
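A minimal sketch of the "doable" mappings above, assuming `expect` is installed from npm (`node:test` ships with Node.js):

```ts
import { test, mock } from 'node:test';
import { expect } from 'expect';

test('should call the function once', () => {
  const fn = mock.fn(); // Instead of jest.fn()

  fn(1);

  // Instead of expect(fn).toHaveBeenCalledTimes(1).
  expect(fn.mock.calls).toHaveProperty('length', 1);

  // Instead of jest.spyOn(console, 'error').
  const consoleError = mock.method(console, 'error', () => {});

  console.error('boom');
  expect(consoleError.mock.calls).toHaveProperty('length', 1);

  consoleError.mock.restore();
});
```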
- WireGuard needs a good clock, and captive portals could block NTP (port 123), as they assume (without knowing) that most mobile devices have a good clock
- Raspberry Pi: add `maxcpus=1` to `/boot/firmware/cmdline.txt` to disable cores
- To temporarily pause, call `reader.releaseLock()`
- To signal stop, call `reader.cancel()`
- With a pending `reader.read()`
  - `reader.releaseLock()` will reject the `read()`
  - `reader.cancel()` will resolve the `read()` with `undefined`
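A minimal sketch of the reader behaviors above, assuming a stream of numbers:

```ts
const stream = new ReadableStream<number>({
  start(controller) {
    controller.enqueue(1);
    // Intentionally never closed, so a read() can stay pending.
  }
});

// Temporarily pause: release the lock. The stream itself is untouched
// and a new reader can pick up where this one left off.
const reader = stream.getReader();
await reader.read(); // { done: false, value: 1 }
reader.releaseLock(); // Would reject a pending read()

// Signal stop: cancel() resolves the pending read() with { done: true, value: undefined }.
const anotherReader = stream.getReader();
const pending = anotherReader.read();
await anotherReader.cancel();
console.log(await pending); // { done: true, value: undefined }
```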
```sh
sudo apt install cups printer-driver-gutenprint

# Auto-start CUPS
sudo systemctl enable --now cups.service

# Enable admin
sudo usermod -a -G lpadmin pi

# Enable remote access
sudo cupsctl --remote-any

# Restart CUPS to save changes
sudo systemctl restart cups
```
Then, navigate to `https://<hostname>:631/` to add a printer. Check "Share this printer".
- Android
  - Install the Mopria app
  - Add printer, hostname is `<hostname>:631/printers/Your_Printer_Name` (no `http://`)
- Windows
  - Add an IPP printer with URL of `http://<hostname>:631/printers/Your_Printer_Name`
```sh
sudo apt update

# "openresolv" required as Raspberry Pi does not have "resolvconf".
sudo apt install wireguard openresolv

# Import wg0.conf.
# Add "PersistentKeepalive = 25" if the NAT router kills UDP too soon.
sudo pico /etc/wireguard/wg0.conf

sudo wg-quick up wg0
sudo systemctl enable --now wg-quick@wg0
```
Run `nmtui` in terminal.
- Testing in browser
  - Using `jest.fn` and `jest.spyOn` in the browser means `import { fn, spyOn } from 'jest-mock'` under `<script type="module">`
  - Import map is like a much simplified version of `package.json`
- To efficiently spy on a function and set up an expectation
  - Spy: `spyOn(console, 'log')`, no need to assign to a constant
  - Expectation: `expect(console.log).toHaveBeenCalledTimes(1)`
- Bundling in a monorepo
  - A monorepo with hoisted dependencies means most `/node_modules/` are located at the root of the project
  - Deploying a package to run in a container means the root `/node_modules/` needs to be packed
    - The root `/node_modules/` may contain dependencies used by other packages
  - Bundling helps pick the minimal set of dependencies that needs to be packaged to run
- The official emulator does not run on ARM64 yet
  - Linux and ARM64 support is in preview, https://learn.microsoft.com/en-us/azure/cosmos-db/emulator-linux
    - The image is huge (> 2 GB)
    - `EXISTS` is not implemented
- cosmium is an unofficial emulator written in Go
| Scenario | Real | Official emulator | cosmium |
|---|---|---|---|
| Bracket notation | ✅ | ✅ | ✅ |
| Bracket notation with parameter | ✅ | ❌ Syntax error | ❌ Return empty |
| `ARRAY_CONTAINS` | ✅ | ✅ | ❌ Return empty |
| `ARRAY_CONTAINS` with parameter | ✅ | ❌ Syntax error | ❌ Return empty |
| `EXISTS` subquery | ✅ | ❌ Not implemented | ✅ |
| `EXISTS` subquery with parameter | ✅ | ❌ Not implemented | ✅ |
| `batch()` | ✅ | ? | ❌ Emulator-side process error |
| `bulk()` | ✅ | ? | ❌ Emulator-side process error |
Code snippets:

- Bracket notation: `WHERE c.map[@name] = @value`
- `ARRAY_CONTAINS`: `WHERE ARRAY_CONTAINS(c.array, @value)`
- `EXISTS`: `WHERE EXISTS (SELECT p FROM p IN c.array WHERE p = @value)`
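A minimal sketch of running the parameterized snippets above with `@azure/cosmos`; the database/container names and the connection string variable are hypothetical:

```ts
import { CosmosClient } from '@azure/cosmos';

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!)
  .database('my-database')
  .container('my-container');

// Parameterized EXISTS subquery, as in the snippet above.
const { resources } = await container.items
  .query({
    query: 'SELECT * FROM c WHERE EXISTS (SELECT p FROM p IN c.array WHERE p = @value)',
    parameters: [{ name: '@value', value: 'some-value' }]
  })
  .fetchAll();
```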
- `orderBy=modifiedTime+ASC` is not working properly
  - It returns data in ascending order, but not from day 0; it starts from some random day
  - However, descending is working properly
- Pagination (`limit`/`offset`) only works for 90 days of data
  - It will report end of data after 90 days
  - Use `modifiedTime` for pagination instead, i.e. `filter=modifiedTime%3C%3D1734773188000`
- Pagination (`limit`/`offset`) is too naive to use with real-time data
  - Page overlap logic is required when data is expected to update in real time
- Customer table does not have a `modifiedTime` field despite there being a `metadata.modifiedTime` field
  - Default seems to be order by `customerSince DESC`
    - Order by `customerSince ASC` will sort it in ascending order
    - Order by `modifiedTime DESC` will do nothing
    - Order by `metadata.modifiedTime DESC` will throw HTTP 400
  - Modifying a customer will not move its position
- Product item table will update frequently because every order will change item stock
- Webhooks require OAuth and probably publishing the app publicly
  - Per documentation, webhooks may not be very reliable
- If the server is too busy, it may simply kill the socket connection instead of returning 429
- For HTTP 429, despite throttling being 16 requests per second per access token, the quota does not reset frequently
Template of `MyElement.ts`:

```ts
import type { IterableElement } from 'type-fest';

export const observedAttributes = Object.freeze(['data-value']); // For HTML sanitizer
export const tagName = 'my-tag-name'; // For HTML sanitizer

type ObservedAttribute = IterableElement<typeof observedAttributes>;

class MyElement extends HTMLElement {
  static observedAttributes: readonly string[] = observedAttributes;
}

let defined = false;

export function defineMyElement() {
  if (!defined) {
    customElements.define(tagName, MyElement);
    defined = true;
  }
}

// Type-friendly way to create the element.
export function createMyElement(
  ownerDocument: Document,
  attributesInitDict: Readonly<{ [K in ObservedAttribute]?: string | undefined }>
): MyElement {
  defineMyElement();

  const myElement = ownerDocument.createElement(tagName) as MyElement;
  const value = attributesInitDict['data-value'];

  // Guard: assigning undefined to dataset would store the string "undefined".
  if (typeof value === 'string') {
    myElement.dataset['value'] = value;
  }

  return myElement;
}
```
- `markdown-it` vs. `micromark`, microsoft/BotFramework-WebChat#5330
  - `micromark` is more like a SAX pipeline
  - We can't parse Markdown via `mdast` into an AST and render it via `micromark`, because `micromark` is SAX and not AST
<input type="hidden">
will not participate in HTML Constraint Validation- Calling
HTMLInputElement.setCustomValidity('Some error')
will not failHTMLFormElement.checkValidity()
- Calling
- Imperative function is "what you would do": `while (speed < 60) { accelerate(); }`
- Declarative function is "what you want": `cruiseControl(60);`
- `<video>` is currently the only way to do P-in-P (a sketch follows this list)
- Steps:
  - Create `<canvas>`, no need to attach to DOM
  - `videoElement.muted = true` to allow programmatic play
  - `videoElement.srcObject = canvasElement.captureStream()` to play the `<canvas>` in the `<video>` at zero FPS (on-demand)
  - Draw on canvas
  - `MediaStream.getVideoTracks()[0].requestFrame()` to capture the `<canvas>` into the `<video>`
  - `await videoElement.play()` to start playing the video again
  - On the `videoElement.timeupdate` event, call `videoElement.pause()` to pause immediately
    - This allows the browser/device to go to sleep
  - On the `click` event, call `videoElement.requestPictureInPicture()`; P-in-P requires a gesture
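A minimal sketch of the steps above; the draw logic is hypothetical and the types come from lib.dom:

```ts
const canvasElement = document.createElement('canvas'); // No need to attach to DOM
const videoElement = document.createElement('video');

videoElement.muted = true; // Allow programmatic play()

// captureStream(0) = zero FPS: frames are only pushed via requestFrame().
const mediaStream = canvasElement.captureStream(0);
videoElement.srcObject = mediaStream;

function draw(): void {
  const context = canvasElement.getContext('2d');
  context?.fillText(new Date().toLocaleTimeString(), 10, 50);

  // Capture the current <canvas> content into the <video>.
  const [track] = mediaStream.getVideoTracks();
  (track as CanvasCaptureMediaStreamTrack).requestFrame();
}

// Pause immediately after a frame plays, so the browser/device can sleep.
videoElement.addEventListener('timeupdate', () => videoElement.pause());

// P-in-P requires a gesture.
document.addEventListener('click', async () => {
  draw();
  await videoElement.play();
  await videoElement.requestPictureInPicture();
});
```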
- Managed Identity is a resource running under a resource group, similar to App Registrations, which run under a directory
  - Easier to clean up
- 2 ways to authenticate the running code as a managed identity: federated identity, or running under Azure (with the identity assigned)
- Producing a token: one resource (say, Web Apps) can operate under 1+ identities. Which identity to use to talk can be selected.
  - Usually, an HTTP token server on localhost:12345 will be able to generate tokens for code that runs under Azure
    - Different services use different token server implementations
  - Use `new ManagedIdentityCredential({ clientId: process.env.AZURE_CLIENT_ID }).getToken('https://vault.azure.net')`
    - https://vault.azure.net is the scope
    - A single scope must be set; otherwise, it will consider it multiple scopes and getting the token will fail
- Consuming the token: varies from service to service
  - Computer Vision uses `Authorization: Bearer eyJ`
  - Speech SDK uses `Authorization: Bearer aad#/subscription/...#eyJ`
Each Azure service has its own similar implementation of the token server, and it is only accessible locally on the same box.

```sh
export IDENTITY_ENDPOINT=http://localhost:4141/MSI/token
export IDENTITY_HEADER=12345678-1234-5678-abcd-12345678abcd

wget --header "x-identity-header: $IDENTITY_HEADER" "$IDENTITY_ENDPOINT?resource=https://vault.azure.net&api-version=2019-08-01"
```

```http
GET /MSI/token?resource=https://vault.azure.net&api-version=2019-08-01 HTTP/1.1
Host: localhost:4141
X-IDENTITY-HEADER: 12345678-1234-5678-abcd-12345678abcd
```

`resource` means scope. For example, the Speech Services scope is https://cognitiveservices.azure.com. This is not OIDC; there is nothing at https://vault.azure.net/.well-known/openid-configuration.
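A minimal sketch of calling this local token server from Node.js; `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are injected by the Azure host, and the response field name is an assumption from memory:

```ts
const { IDENTITY_ENDPOINT, IDENTITY_HEADER } = process.env;

const url = new URL(IDENTITY_ENDPOINT!);
url.searchParams.set('resource', 'https://vault.azure.net'); // The scope, a single one only
url.searchParams.set('api-version', '2019-08-01');

const res = await fetch(url, { headers: { 'x-identity-header': IDENTITY_HEADER! } });
const { access_token } = await res.json(); // Then send as "Authorization: Bearer <token>"
```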
```sh
npm install expect mocha sinon --save-dev
```

```diff
+ import { expect } from 'expect';
+ import { fake } from 'sinon';

- test('should work', () => {
+ it('should work', () => {
-   const fn = jest.fn();
+   const fn = fake(() => {});

    fn(1);

-   expect(fn).toHaveBeenCalledTimes(1);
+   expect(fn).toHaveProperty('callCount', 1);

-   expect(fn).toHaveBeenNthCalledWith(1, 1);
+   expect(fn.getCall(0)).toHaveProperty('args', [1]);
  });
```
- If an HD/4K UVC device is connected via USB 2.0, it will not announce availability of 1920x1080 YUYV and other formats that require USB 3.x bandwidth
- VLC is better at controlling audio buffering than ffmpeg/ffplay

Elgato HD60 X uses standard 1920x1080 YUYV (4:2:2) and NV12 (4:2:0), so it is supported by `v4l2` without any extra drivers. Tested to work under Raspberry Pi OS Lite (Bookworm) with `sudo apt-get install xinit vlc`.

This is more-or-less a UVC-to-HDMI converter. Using 1920x1080 (HD) to output to ATEM Mini Pro; should be good for 3840x2160 (4K) as well. So I can play Xbox in 4K while streaming RTMP via ATEM Mini Pro at HD. Essentially bundling the Elgato HD60 X and a Raspberry Pi 4 together as an HDMI downscaling splitter.

Total latency from Xbox Series X -> Elgato HD60 X -> Raspberry Pi 4 -> ATEM Mini Pro -> RTMP server -> OBS is about 0.5-1 second. RTMP is the biggest factor.
Put this under `crontab` with `@reboot /home/pi/playback.sh`.
```sh
cvlc \
  v4l2:///dev/video0:width=1920:height=1080:chroma=YUYV &

cvlc \
  --audio-desync=12 \
  -A alsa \
  --alsa-audio-device sysdefault:CARD=vc4hdmi0 \
  alsa://hw:CARD=X,DEV=0
```
- View a webcam (UVC) on Raspberry Pi, so it "converts" a USB-C webcam into an HDMI signal
  - `sudo apt-get install ffmpeg vlc xinit`
  - `v4l2-ctl --list-formats-ext` to see what resolutions/chroma it supports
  - VLC: `cvlc v4l2:///dev/video0:chroma=H264:width=1920:height=1080` as 1920x1080 with h.264 "chroma"
  - ffplay: `ffplay /dev/video0 -f v4l2 -input_format h264 -video_size 1920x1080 -vcodec h264_v4l2m2m`
    - This is decoded via the Raspberry Pi hardware decoder (`h264_v4l2m2m`)
- Each webcam has different resolutions/chroma, for example
  - Razer Kiyo Pro outputs 1920x1080 h.264 or MJPEG, or 640x360 YUYV (4:2:2) or NV12 (4:2:0)
  - ATEM Mini outputs 1920x1080 MJPEG
  - Elgato HD60 X outputs 1280x720 YUYV (4:2:2) or NV12 (4:2:0)
    - ~~HD and 4K profiles on Elgato HD60 X are not detected by `v4l2`~~ (this was because of a USB 2.0 cable)
  - Both Windows and Android (Xperia 1 V) can use the Elgato HD60 X with an HD/4K signal of unknown chroma; this seemed like a limitation of `v4l2` rather than a proprietary chroma/codec
- tsup IIFE
  - Will emit `var globalName = (() => { ... })();`, not exactly UMD but close
  - Ignores `external` and will bundle everything, because an IIFE cannot load other deps via require/import
  - Needs `platform: 'browser'` to load "browser" export conditions
In the following diff, the removed line is less performant than the added line.

```diff
  const MyComponent = memo(...);

  const App = ({ children }) => (
    <MyComponent>
-     <div>{children}</div>
+     {useMemo(() => <div>{children}</div>, [children])}
    </MyComponent>
  );
```

This is because the `children` prop changes on every re-render, defeating `memo()`. Desugared into an explicit `children` prop, the two versions are:

```diff
- <MyComponent children={<div>{children}</div>} />
+ <MyComponent children={useMemo(() => <div>{children}</div>, [children])} />
```
- Hoisted vs. non-hoisted
  - In non-hoisted mode, some packages may bring another version of a production package directly under the root `/node_modules` as a transitive dependency
  - The "wrong" version becomes more visible and could be picked up by mistake by esbuild or Webpack
- `splitting` means whether it should code-split common parts across 2 entrypoints (`true`/`undefined`), or just duplicate them (`false`)
  - For React context, it is important to have a single copy of the code, rather than duplicates
- Instead of moving stuff from `dependencies` to `devDependencies`, we can also mark a package via `noExternal: ['bundle-this-package']` (see the config sketch below)
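A minimal sketch of a tsup config exercising the options discussed above; the entry point and global name are hypothetical:

```ts
import { defineConfig } from 'tsup';

export default defineConfig({
  entry: ['src/index.ts'],
  format: 'iife',
  globalName: 'myBundle', // Emits var myBundle = (() => { ... })();
  platform: 'browser', // Load "browser" export conditions
  splitting: false, // Duplicate common chunks instead of code-splitting
  noExternal: ['bundle-this-package'] // Bundle it even though it is in dependencies
});
```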
- Type portability means all types used in all exported code are exported as well
  - If there are types that we don't want to export (internal/private), we should rewrite the type in the exported code so we cut the connection there
  - `dts: true` seems not to check type portability, but `experimentalDts: true` does
  - `experimentalDts` or `@microsoft/api-extractor` requires the `tsconfig.json` to be at the project root, rather than next to the code inside `/src/`
- Node.js doesn't know about `package.json/module` at all; it is a de facto standard used by Webpack et al. only
- Don't use the `.cjs`/`.mjs` file extensions, use `.js` only
  - Otherwise, Webpack in `create-react-app` will consider it an asset file similar to `.gif` or `.txt`, i.e. returning a string and copying it to the asset folder
- `package.json/type` should be the module format of the `.js` file referenced by the `package.json/main` field
- CSS: `prefers-reduced-motion: reduce` does not stop GIF animation from playing
Easter Eggs = breathe life into product.
EEE, maybe: expected, exceeded, extraordinary.
- Why is `0.0.1` considered less stable?
  - `0.0.2` is not picked up by `npm install my-package@^0.0.1`
    - `0.0.1` -> `0.0.2` is considered a major bump and could introduce breaking changes
  - `0.0.1` can still be a very high quality build, but it has a tendency to introduce breaking changes in the near future, i.e. unstable
  - Unstable and production-ready are two different metrics; they are orthogonal to each other. A version can be both unstable and production-ready
- In some perspective:
  - `0.0.1`: "I will break your stuff on the next release."
    - The product is in the experimental phase
  - `0.1.0`: "I will add new features on the next release. Bugs could be fixed along with new features."
    - The product is in the exponential growth phase
  - `1.0.0`: "I am okay to pause new work and fix bugs."
    - The product has full support capacity
  - `0.0.1` and `0.1.0` don't mean quality issues. It is more about prioritizing release schedule over full support
- Why prevent outsiders from running a workflow despite the workflow being read-only and using a read-only token?
  - Outsiders can modify the workflow in a pull request and run their own payload
JSON-LD and Schema.org

- Base IRI: `{ "@id": "" }` (empty string) represents the document base (e.g. the thing that describes the current webpage)
- Blank node identifiers: `{ "@id": "_:b1" }` or `{ "@id": "_:any-valid-string" }` represent nodes that appear locally
  - Blank node identifiers are used for serializing a graph with cyclic dependencies and for flattening
  - A node that references other nodes is called a blank node, and it should only have the `@id` property specified
- `{ "@type": "@json" }` marks the data as JSON and keeps it as-is during JSON-LD transformation
  - Otherwise, the JSON-LD processor will ignore unknown properties and remove them during transformation
- For multiple inheritance, use `{ "@type": ["DigitalDocument", "MediaObject"] }` (with most-recently-defined-wins)
- A JSON array in JSON-LD is implicitly unordered (a.k.a. set); ordering needs to be explicitly specified (a.k.a. list)
  - JSON-LD considers set/list a special type of map with an indexer
- Flattened vs. embedded graph
  - Flattened: all nodes are at the top level and potentially connected using IRIs or blank nodes
  - Embedded: nodes can be nested into another node; for referencing other nodes that already exist in the graph, IRIs or blank nodes may be used
- Schema.org specifics
  - Singular vs. plural: both are allowed for all properties. Look at the property description to see if it should be explicitly plural
    - `keywords` is very likely to be plural
    - `firstAppearance` is very likely to be singular
  - Key concepts of `Claim`
    - "Some data is better than no data."
  - Very Google-driven and targeting SEO scenarios
- `align-items: flex-start` or `align-self: flex-start` will interrupt `text-overflow: ellipsis`
- Install PiVPN by `curl -L https://install.pivpn.io/ | bash`
- Overwrite `/etc/wireguard/wg0.conf` with the client `.conf` file (i.e. the content of the QR code)
- Reboot, done
- Generally, rebroadcast mDNS and proxy TCP 9100, as Brother applies industry standards to the label printer
- Remote printing to printers on the network over WireGuard is possible by simply proxying mDNS and TCP 9100
  - Capture the mDNS entry (only `ipp` is needed for the Android app to work; `http`, `ipps` and `workstation` are not required)
  - Proxy (a.k.a. rebroadcast) the mDNS entry
  - Proxy the traffic at TCP 9100
  - One IP address for one printer only; multiple IP addresses are required for multiple printers

On the other hand, the micro USB port on Brother VC-500W could simply be a USB printer and might be exposable via CUPS. I didn't explore this area.

If mDNS can be captured over WireGuard (a non-multicast network), it could be possible to automatically proxy a printer by simply providing an IP address. I am unsure if mDNS is reachable by simple unicasting.

- Brother VC-500W broadcasts itself as an AirPrint printer (`pdl = application/octet-stream,image/jpeg,image/png,image/urf`, `kind = roll`, `printer-type = 0x904E`)
- The Android app talks to the printer via the AirPrint protocol (`ipp` over port 631; no need for `ipps`, `http` and `workstation`)
- It broadcasts itself through mDNS over 224.0.0.251:5353
- Despite the mDNS entry saying the printer is listening on TCP port 631, the Android app connects to TCP port 9100
- Azure Container Apps is more-or-less a managed version of Azure Kubernetes Service
  - Excerpt: "Container Apps uses the power of the underlying Azure Kubernetes Service (AKS) while removing the complexity of having to work with Kubernetes APIs."
- Azure Container Apps is all about quickly spinning up to handle load (scalers include HTTP, pull-based events, cron), then slowly reducing replicas to zero
- Events must be pull-based (KEDA)
  - Number of blobs, but not changes to blobs
  - Queue is okay, but not Cosmos DB changes
  - Kubernetes style of handling events
- Event Grid does not support pulling events from Azure services
  - Event Grid can route events to Azure Queue Storage or Azure Event Hubs
- Jobs do not support Dapr (microservices orchestration) and have no ingress
  - No HTTP, but KEDA
- Can run an infinite/continuous process (minimum replica = 1)
- Can deploy from a private registry
- Can run Azure Functions by hosting the function on a Docker image, with limited triggers: HTTP, Queue Storage, Service Bus, Event Hubs, Kafka. No Cosmos DB and not feature-on-par
- Azure Container Apps Job did not emit logs properly to the log workspace
- Once a job trigger is configured, it is not possible to reconfigure it with another trigger
- Takes about 15 seconds to boot and run
- One job resource = one job + one trigger + multiple containers (init + job)
- `Iterator` is `next`, optional `return` and `throw`
- `Iterable` is `[Symbol.iterator]() { return { next, return, throw } satisfies Iterator<T>; }`
- `Iterator` and `Iterable` are interchangeable
- `IterableIterator` = `Iterable` + `Iterator` = `{ [Symbol.iterator]() } & { next(), return(), throw() }`
- `Generator` is `IterableIterator` with required `return` and `throw`, i.e. all-featured and iterable
- I/O
  - Input: `Array.from(Iterable<T>)`
  - Output: `new Map<T>().values() instanceof IterableIterator<T>`
- Siblings
  - `Observable` (`complete`/`error`/`next`) vs. `Generator` (`next`/`return`/`throw`)
    - `Generator` is suspended/on-demand/pull-based; it will not run in the background and does not need a worker to drive its data
    - `Observable` is event-based; it requires a worker to drive its data
  - `Observable` vs. `EventTarget`
    - `EventTarget` is real-time. If no one listens to an event, dispatched events will be lost. `Observable` buffers them until a subscriber is ready for them
    - When subscribing to an `EventTarget`, it does not know about it. `Observable` knows when someone subscribes to it and normally starts a new instance/operation
    - `EventTarget` is a singleton (one in its world), while `Observable` is single-instance (many in its world)
  - `Observable` (`complete`/`error`/`next`) vs. `ReadableStream` (`close`/`enqueue`/`error`)
    - `Observable` is push-based (must have a worker); `ReadableStream` can be either or both push-based and pull-based (not having a worker)
    - When implementing a pull-based `ReadableStream`, it has a watermark and can be automatically corked by not pulling when the watermark is high
    - `ReadableStream` can easily tee and perform transformation (N:M transformation)
  - `ReadableStream` vs. `Generator`
    - `Generator` is easier to write thanks to `function* ()` syntactic sugar
    - Say, "after the generator is completely iterated, run some logic" is not trivial to build using `ReadableStream` but trivial with `Generator`
- Using a for-loop with a generator will lose some abilities: no return value, and exceptions thrown cannot be caught in the generator (see the sketch below)
  - `try`-`finally` in a generator will still work
  - `yield` in `finally` may not work because exceptions thrown cannot be caught in the generator, and `yield` in `finally` will simply stop the `finally`
    - Maybe refrain from `yield` in `finally`
- An iterable should generally be used with a for-loop, which doesn't call `next` with a value or expose `return`; thus it is `Iterator<T>` instead of `Iterable<T, TReturn, TNext>`
  - However, a generator natively supports return/throw and can become iterable, so for-loop-ing a generator may miss some values
Read about Generator return on MDN.
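A minimal sketch of the "for-loop drops the return value" point above:

```ts
function* generate(): Generator<number, string> {
  yield 1;
  yield 2;
  return 'done'; // A for-of loop never sees this
}

for (const value of generate()) {
  console.log(value); // 1, 2 -- 'done' is lost
}

// Driving the iterator manually does expose the return value.
const iterator = generate();
iterator.next(); // { done: false, value: 1 }
iterator.next(); // { done: false, value: 2 }
iterator.next(); // { done: true, value: 'done' }
```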
```ts
export default class IterableIteratorFromIterator<T, TReturn, TNext> implements IterableIterator<T> {
  constructor(iterator: Iterator<T, TReturn, TNext>) {
    this.next = iterator.next.bind(iterator);
    this.return = iterator.return && iterator.return.bind(iterator);
    this.throw = iterator.throw && iterator.throw.bind(iterator);
  }

  [Symbol.iterator](): IterableIterator<T> {
    return this;
  }

  next: () => IteratorResult<T>;
  return?(value?: TReturn): IteratorResult<T, TReturn>;
  throw?(e?: any): IteratorResult<T, TReturn>;
}
```
```ts
export default async function* <T>(readableStream: ReadableStream<T>): AsyncIterableIterator<T> & AsyncIterator<T, T> {
  const reader = readableStream.getReader();

  for (;;) {
    const response = await reader.read();

    if (response.done) {
      return response.value;
    }

    yield response.value;
  }
}
```
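Hypothetical usage of the helper above, naming the default export `readableStreamValues`:

```ts
import readableStreamValues from './readableStreamValues';

const stream = new ReadableStream<number>({
  start(controller) {
    controller.enqueue(1);
    controller.enqueue(2);
    controller.close();
  }
});

for await (const value of readableStreamValues(stream)) {
  console.log(value); // 1, 2
}
```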
- `union()` means `T | U` and `intersect()` means `T & U`
- `isoTimestamp()` is not quite ISO yet; some improvements could be made
- Validating all types of inputs is nice, because people may not use TypeScript to write their integration code
- `parse()` is great at pumping out what's wrong, not great at visualizing the wrongs for a human
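A minimal sketch of `union()`/`intersect()`, using the valibot v0.x API current at the time of writing:

```ts
import { intersect, number, object, parse, string, union } from 'valibot';

const id = union([string(), number()]); // string | number
const named = intersect([object({ id }), object({ name: string() })]); // { id } & { name }

parse(named, { id: 1, name: 'Alice' }); // OK
parse(named, { id: true, name: 'Alice' }); // Throws a ValiError listing what's wrong
```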
Interesting read on focus indicator around buttons.
- Sub-orchestration primarily reduces replay cost
  - After completion, the "heap" in the sub-orchestration will be discarded
  - Can be used to reduce replay time in the parent orchestration and minimize points of failure
- Orchestration replay time is largely based on 2 things
  - High impact: number of activities executed (total size of activity output)
  - Medium impact: working set size (size of each activity output)
  - Each activity output is saved into a TGZ file; many activities executed means downloading many TGZ files, which means a higher chance of failure
  - When an action starts, if it fails to download history, it will time out after 5 minutes
  - Consider extending the orchestration session (lingering orchestration) to reduce replay boot time
- No complex logic in sub-orchestration
- Orchestration replay promotes determinism, which also means idempotency (use cache, minimize refetch)
  - Don't mess with `isReplaying`, it isn't worth the complexity
- An activity should only run for a short period of time (< 5 minutes)
  - Sub-orchestration is the pattern for running longer jobs
  - Some tips here
- Large working set (> 64 KB)
  - A large working set is saved to Blob storage instead of Queue storage
  - Waking up the orchestrator is literally queueing in Queue storage
  - Rehydrating a large working set in orchestration is prone to failure (task being cancelled)
  - If possible, keep the large working set in the activity and don't output it back to the orchestration
Consider using Service Bus to queue HTTP calls that might return 429. If a 429 is received, requeue the message with a schedule based on the 429 cooldown period.
- When the partition key is the same, we could potentially keep documents in the same container
- Documents should not be kept in the same container when:
  - Change feed is required for certain types of documents
- Not very useful in Azure Functions alone; may work better in Web Apps or Static Web Apps, or Azure Front Door
  - The cookie will be saved on the Azure Functions domain and it requires 3P cookies, which are being deprecated
- Doesn't work on the local Azure Functions emulator
- Probably originated from Azure Mobile App Service (Project Zumo)
- Read this: https://learn.microsoft.com/en-us/azure/app-service/configure-authentication-customize-sign-in-out
- To authenticate (via MSAL so I can auth on another domain), see the sketch after this list:
  - Use MSAL with scopes of `openid`
  - Grab the `idToken` from the MSAL call
  - Send it to /.auth/login/aad with `{ "access_token": idToken }`
  - Should return `{ "authenticationToken" }`; this is a local token
  - On every API call, add `X-ZUMO-AUTH` with the content of `authenticationToken`
- It works in many scenarios except Server-Sent Events and Web Socket, where headers cannot be altered
  - I remember the new `fetch()` could now build a Web Socket and pass headers, but I could not find it now
  - `@microsoft/fetch-event-source` is outdated and I don't like its API signature
- Better to queue it up in Azure Service Bus
  - Otherwise, when Azure Functions fails, you have no way (or it's too painful) to retry
- Consider you are working on `{ tags: ['area-ui', 'bug'] }`
  - It is easy to add stuff to `tags` without concern about concurrency, like `{ op: 'add', path: '/tags/-', value: 'area-accessibility' }` (`/-` means append)
  - It is difficult to remove stuff because you need the index, like `{ op: 'remove', path: '/tags/2' }`
  - For concurrency requirements, maybe use another document (a sketch follows)
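A minimal sketch of the two patch operations above with `@azure/cosmos`; the container name, item id and partition key value are hypothetical:

```ts
import { CosmosClient } from '@azure/cosmos';

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!)
  .database('my-database')
  .container('bug');

// Append: no index needed, no concurrency concern.
await container.item('b-00001', 'b-00001').patch([
  { op: 'add', path: '/tags/-', value: 'area-accessibility' }
]);

// Remove: the index is required, so concurrent writers can race.
await container.item('b-00001', 'b-00001').patch([
  { op: 'remove', path: '/tags/2' }
]);
```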
These are fun reads:
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/model-partition-example
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/modeling-data
Fun part: a container can store 2+ types of documents, and they just need to agree on the partition key.

For the "add/remove tags" concurrency problem in the "patch operation" section above, we could try these 2 documents in the same container:
```jsonc
{
  "id": "u-00001",
  "userId": "u-00001", // this is the partition key
  "type": "user",
  "name": "John Doe"
}

{
  "id": "t-00001",
  "userId": "u-00001", // this is the partition key
  "type": "tag",
  "tag": "bug"
}
```
Then, when querying the container by `userId`, we grab all documents (of types `user` and `tag`). Then we can turn them back into the object model.

```js
database.container('user').items.readAll({ partitionKey: userId }).fetchAll();
```
If we want to model this object in the database:

```js
{
  id: 'b-00001',
  description: 'Button not working',
  tags: ['bugs', 'area-ui']
}
```
Traditionally, you would write this in a relational database:

| ID | Description | Tags |
|---|---|---|
| `b-00001` | Button not working | `bugs,area-ui` |
Then, you would normalize it into 2 tables:

| ID | Description |
|---|---|
| `b-00001` | Button not working |

| ID | Bug ID (FK) | Tag |
|---|---|---|
| `t-00001` | `b-00001` | `bugs` |
| `t-00002` | `b-00001` | `area-ui` |
In a relational database, you would need 2 queries to get the result back into the object model.

But in a document DB, you would do:

| ID | Bug ID (PK) | Type | Description | Tag |
|---|---|---|---|---|
| `b-00001` | `b-00001` | `bug` | Button not working | |
| `t-00001` | `b-00001` | `tag` | | `bugs` |
| `t-00002` | `b-00001` | `tag` | | `area-ui` |
And you get everything in a single query, while data is normalized.
```ts
const iterate = () => ({
  [Symbol.asyncIterator]: () => ({
    async next(): Promise<IteratorResult<number>> {
      // ...
    }
  })
});
```

When returning `{ done: true, value: 123 }`, the value `123` will probably be lost if iteration is done through a for-loop.
tl;dr: not supported, it just buffers up before sending the response body.

Read about Server-Sent Events on MDN.

Code snippet tested; the result is buffered:
```ts
app.http('...', {
  async handler() {
    const body: AsyncIterator<Uint8Array> = build(); // Will build an SSE output stream

    return { body, contentType: 'text/event-stream' };
  }
});
```
- MSAL is great if you use it the way it's intended
- You can't read the "access token" when using `@azure/msal-browser` because you shouldn't access sensitive stuff in the browser
- Acquire token by redirect is nice, because it auto-removes `#code=`
- Orchestration through `yield` by replaying
  - Clever; `yield` is provenly good for orchestration and pause/resume, see `redux-saga`
  - Replay is mostly good, except some limitations because replay is not 100% exact
- Don't be lazy: type out activity input/output via `valibot`
Type template for valibot:

```ts
import { object, parse, string, type Output } from 'valibot';
import type { ReadonlyDeep } from 'type-fest';

const activityInput = () => object({
  id: string(),
  name: string()
});

export default activityInput;

export const parseActivityInput = (data: unknown) => Object.freeze(parse(activityInput(), data)); // Or deep freeze

export type ActivityInput = ReadonlyDeep<Output<ReturnType<typeof activityInput>>>;
```
A package that provides virtualized scrolling to anything. Another Fluent UI Contrib package integrates `<DataGrid>` with it.
- Requires JavaScript to set `width`/`height` of the container which holds the virtualized viewport
- Can't CTRL + F to find stuff (via Fluent UI Contrib)
- Maybe just using CSS `content-visibility: auto` is good enough (no Safari support)
- `<DataGrid>` has serious performance issues:
  - Why is hovering over 2,000 rows slow?
  - Why does sorting 2,000 rows take seconds?
  - Why do I need to copy the `<DataGrid>` template, and why is it impossible to recite?
  - Why is the sample/scaffold/template not using/encouraging `useCallback` at all?
    - Why are web component devs not familiar with `useCallback`/`useMemo`?
  - Maybe it's opinionated, but my opinions aren't about their opinions, they're about their facts
- UI is good for desktop, not great for mobile (too small, etc.)
- If you want "write once", it is still okay
Throttling on the client side (Azure Functions side) using the `limiter` package.
```js
const limiter = new RateLimiter({ interval: 'second', tokensPerInterval: 50 });

for (const id of idsToRead) {
  await limiter.removeTokens(1); // Assume minimum request charge is 1

  const result = await database.container('user').item(id).read();

  // No need to await, we already spent that charge. Next caller will pause if throttled.
  limiter.removeTokens(result.requestCharge - 1);

  yield result.resource;
}
```
Throttling through Azure Service Bus scheduled enqueue is not great:

- When some operations fall behind, the scheduled time could have already passed for many transactions
- A bursting effect will occur (many transactions will be executed at the same time)
- The more transactions to execute, the more likely to get 429, the more likely to fail Service Bus processing, the more likely to retry, and more transactions will run again
- It is easy to learn the CTRL and CAPSLOCK swap
- It is easy to learn having no F1-F12 keys
- You don't move your palms at all and you can stay very focused on typing and thinking
- Reduces mouse usage by 80%
- To do CTRL + DELETE on a normal keyboard:
  - Fn + CTRL + ` won't work
  - CTRL + Fn + ` will work
  - Not easy at the moment
- `/etc` files of Pi-Hole can be huge (about 1 GB), and some configuration is stored inside their DB files (binary)
  - Some OSS projects attempt to sync them, but I think it's non-trivial
  - `rcp` is still good
- Editing Pi-Hole configuration online is not very helpful because settings are stored inside DB files