
Feature/webdav sync #189

Open
jdkruzr wants to merge 58 commits into Ethran:main from jdkruzr:feature/webdav-sync

Conversation

jdkruzr (Contributor) commented Dec 25, 2025

Outstanding Issues:

  • Needs image/other attachment hashing solution to avoid duplication (requires messing with database, tabling for now)
  • Needs folder deletion/cascade solution (also requires messing with database)

jdkruzr added 30 commits December 11, 2025 15:59
Implements Phase 1 of WebDAV synchronization feature:
- Add dependencies: WorkManager, OkHttp, security-crypto
- Add network permissions (INTERNET, ACCESS_NETWORK_STATE)
- Create SyncSettings data class with sync configuration
- Implement CredentialManager for encrypted credential storage
- Implement WebDAVClient with full WebDAV operations
  - Basic authentication support
  - PROPFIND, PUT, GET, DELETE, MKCOL methods
  - Directory creation and file streaming support
Implements Phase 2 of WebDAV synchronization:
- FolderSerializer: Convert folder hierarchy to/from folders.json
- NotebookSerializer: Convert notebooks/pages/strokes/images to/from JSON
  - Handles manifest.json for notebook metadata
  - Handles per-page JSON with all strokes and images
  - Converts absolute URIs to relative paths for WebDAV storage
  - Supports ISO 8601 timestamps for conflict resolution

Phase 2 complete. Next: SyncEngine for orchestrating sync operations.
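The ISO 8601 timestamp handling described above could be sketched as follows (a minimal sketch; the formatter pattern and fallback behavior are assumptions, not the PR's exact code). Note that this pattern drops milliseconds, which matters for conflict resolution.

```kotlin
import java.text.SimpleDateFormat
import java.util.Date
import java.util.Locale
import java.util.TimeZone

// Round-trip between Date and ISO 8601 strings for WebDAV storage.
// This pattern has second precision only, so milliseconds are lost.
private val iso8601Format = SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.US).apply {
    timeZone = TimeZone.getTimeZone("UTC")
}

fun formatIso8601(date: Date): String = iso8601Format.format(date)

fun parseIso8601(dateString: String): Date =
    try {
        // Remote data is untrusted: fall back instead of aborting the sync.
        iso8601Format.parse(dateString) ?: Date(0)
    } catch (e: Exception) {
        Date(0)
    }
```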
Creates skeleton implementations for remaining sync components:

Core Sync Components:
- SyncEngine: Core orchestrator with stub methods for sync operations
- ConnectivityChecker: Network state monitoring (complete)
- SyncWorker: Background periodic sync via WorkManager
- SyncScheduler: Helper to enable/disable periodic sync

UI Integration:
- Add "Sync" tab to Settings UI
- Stub SyncSettings composable with basic toggle

All components compile and have proper structure. Ready to fill in
implementation details incrementally. TODOs mark where logic needs
to be added.
- Fix KvProxy import path (com.ethran.notable.data.db.KvProxy)
- Replace HTTP_METHOD_NOT_ALLOWED with constant 405
- Correct package imports in SyncEngine
Add full-featured sync settings interface with:
- Server URL, username, password input fields
- Test Connection button with success/failure feedback
- Enable/disable sync toggle
- Auto-sync toggle (enables/disables WorkManager)
- Sync on note close toggle
- Manual "Sync Now" button
- Last sync timestamp display
- Encrypted credential storage via CredentialManager
- Proper styling matching app's design patterns

All settings are functional and persist correctly. UI is ready
for actual sync implementation.
showHint takes (text, scope) not (context, text)
Log URL and credentials being used, response codes, and errors
to help diagnose connection issues
Use Dispatchers.IO for network calls (Test Connection, Sync Now).
Switch back to Dispatchers.Main for UI updates using withContext.

Fixes: NetworkOnMainThreadException when testing WebDAV connection
Core sync implementation:
- syncAllNotebooks(): Orchestrates full sync of folders + notebooks
- syncFolders(): Bidirectional folder hierarchy sync with merge
- syncNotebook(): Per-notebook sync with last-write-wins conflict resolution
- uploadNotebook/uploadPage(): Upload notebook data and files to WebDAV
- downloadNotebook/downloadPage(): Download notebook data and files from WebDAV
- Image and background file handling (upload/download)

Database enhancements:
- Add getAll() to FolderDao/FolderRepository
- Add getAll() to NotebookDao/BookRepository

Sync features:
- Timestamp-based conflict resolution (last-write-wins)
- Full page overwrite on conflict (no partial merge)
- Image file sync with local path resolution
- Custom background sync (skips native templates)
- Comprehensive error handling and logging
- Resilient to partial failures (continues if one notebook fails)

Quick Pages sync still TODO (marked in code).
- Remove context parameter from ensureBackgroundsFolder/ensureImagesFolder
- Fix image URI updating (create new Image objects instead of reassigning val)
- Use updatedImages when saving to database
- Handle nullable URI checks properly
Safety Features for Initial Sync Setup:
- forceUploadAll(): Delete server data, upload all local notebooks/folders
- forceDownloadAll(): Delete local data, download all from server

UI:
- "Replace Server with Local Data" button (orange warning)
- "Replace Local with Server Data" button (red warning)
- Confirmation dialogs with clear warnings
- Prevents accidental data loss on fresh device sync

Use cases:
- Setting up sync for first time
- Adding new device to existing sync
- Recovering from sync conflicts
- Resetting sync environment
Log notebook discovery, download attempts, and directory listings
to diagnose sync issues
Features:
- SyncLogger: Maintains last 50 sync log entries in memory
- Live log display in Settings UI (last 20 entries)
- Color-coded: green (info), orange (warning), red (error)
- Auto-scrolls to bottom as new logs arrive
- Clear button to reset logs
- Monospace font for readability

Makes debugging sync issues much easier for end users without
needing to check Logcat.
Fixes:
- forceUploadAll: Delete only notebook directories, not entire /Notable folder
- Add detailed SyncLogger calls throughout force operations
- Add logging to upload/download operations with notebook titles

Log viewer now shows:
- Exactly which notebooks are being uploaded/downloaded
- Success/failure for each notebook
- Number of pages per notebook
- Any errors encountered

This makes debugging sync issues much easier and prevents
accidentally wiping the entire sync directory.
Add auto-sync trigger when switching pages in editor:
- Hook into EditorControlTower.switchPage()
- Pass context to EditorControlTower constructor
- Trigger SyncEngine.syncNotebook() when leaving a page
- Only syncs if enabled in settings
- Runs in background on IO dispatcher
- Logs to SyncLogger for visibility

Now sync-on-close setting is functional.
Show clearly in sync log:
- ↑ Uploading (local newer or doesn't exist on server)
- ↓ Downloading (remote newer)
- Timestamp comparison for each decision
- Which path is taken for each notebook

This will help diagnose why sync only goes up and never down.
Create AppRepository instance to properly access PageRepository
in triggerSyncForPage method.
Previous logic: equal timestamps → upload
New logic: equal timestamps → skip (no changes needed)

Now properly handles three cases:
- Remote newer → download
- Local newer → upload
- Equal → skip

This prevents unnecessary re-uploads when nothing has changed.
Move sync trigger to EditorView's DisposableEffect.onDispose
which fires when navigating away from the editor.

Now sync-on-close actually works when you:
- Navigate back to library
- Switch to a different notebook
- Exit the app

Will show "Auto-syncing on editor close" in sync log.
Use new CoroutineScope instead of composition scope in onDispose.
The composition scope gets cancelled during disposal, causing
"rememberCoroutineScope left the composition" error.

Now sync-on-close will actually complete.
Log when credentials are loaded or missing to help diagnose
AUTH_ERROR issues.
Show full Date.time millisecond values and difference to diagnose
why timestamps that appear equal are being treated as different.

This should reveal if there are sub-second differences causing
unnecessary uploads.
Add null-safe access to remoteUpdatedAt.time
Problem: ISO 8601 serialization loses milliseconds, causing
local timestamps to always be slightly newer (100-500ms).

Solution: Ignore differences < 1 second (1000ms)
- Difference < -1000ms: remote newer → download
- Difference > 1000ms: local newer → upload
- Within ±1 second: no significant change → skip

This prevents unnecessary re-uploads of unchanged notebooks while
still detecting real changes.
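The three-way decision with tolerance could be sketched as pure logic (names like `decideSyncAction` are illustrative, not the PR's actual code):

```kotlin
enum class SyncAction { UPLOAD, DOWNLOAD, SKIP }

// Compare local and remote timestamps with a tolerance window, so that
// sub-second drift from ISO 8601 serialization does not trigger re-uploads.
fun decideSyncAction(localMillis: Long, remoteMillis: Long, toleranceMillis: Long = 1000L): SyncAction {
    val diff = localMillis - remoteMillis
    return when {
        diff > toleranceMillis -> SyncAction.UPLOAD    // local clearly newer
        diff < -toleranceMillis -> SyncAction.DOWNLOAD // remote clearly newer
        else -> SyncAction.SKIP                        // within tolerance: unchanged
    }
}
```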
After syncing local notebooks, now scans server for notebooks
that don't exist locally and downloads them.

Flow:
1. Sync folders
2. Sync all local notebooks (upload/download/skip)
3. Discover new notebooks on server
4. Download any that don't exist locally

This enables proper bidirectional sync - devices can now discover
and download notebooks created on other devices.
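Steps 3 and 4 reduce to filtering the server listing against local notebook IDs; a minimal sketch (the function name and the UUID filtering are assumptions):

```kotlin
import java.util.UUID

// Given directory entries from a server listing and the set of local
// notebook IDs, keep only valid notebook UUIDs that don't exist locally.
fun notebooksToDownload(serverEntries: List<String>, localIds: Set<String>): List<String> =
    serverEntries
        .filter { runCatching { UUID.fromString(it) }.isSuccess } // skip non-notebook entries
        .filter { it !in localIds }
```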

Ethran commented Feb 3, 2026

please add a switch to explicitly allow syncing on mobile data. It shouldn't sync by default when it's not connected to Wi-Fi.


Ethran commented Feb 3, 2026

Also, please provide instructions on how to set up syncing if the user has 2FA, i.e.:
  1. Log in via browser
  2. Go to Settings → Security
  3. Under Devices & sessions, click Create new app password
  4. Give it a name (e.g. notable)
  5. You will then see a username and password created for this app. Use them to set up Nextcloud.


Ethran commented Feb 3, 2026

Also, the UI doesn't look great with dark mode:
Screenshot_20260203-105433_Notable
And there should be a button to show the password that was entered, but only if the user just entered it, not the saved password. For a saved password I don't see a point in showing anything other than "change password".


Ethran commented Feb 27, 2026

ok, great! I will check it in the evening or tomorrow and let you know if everything is ok.

Also, I started a major refactoring of the codebase, mostly to separate UI from other logic, and so that the @Preview annotation will work. I will probably create a PR in the next few days; it shouldn't affect this too much. I finally discovered why it was so painful to work on UI in Notable.


jdkruzr commented Feb 27, 2026

ok, great! I will check it in the evening or tomorrow and let you know if everything is ok.

Also, I started a major refactoring of the codebase, mostly to separate UI from other logic, and so that the @Preview annotation will work. I will probably create a PR in the next few days; it shouldn't affect this too much. I finally discovered why it was so painful to work on UI in Notable.

this is a very good idea imo. will make maintenance much easier.


jdkruzr commented Feb 27, 2026

Also, the UI doesn't look great with dark mode: Screenshot_20260203-105433_Notable And there should be a button to show the password that was entered, but only if the user just entered it, not the saved password. For a saved password I don't see a point in showing anything other than "change password".

will see if we can do something about the coloring here, as well as updating the password field. side note: the proportions of the screenshot suggest you got a Palma 2 Pro? 1) I love that thing, it's been great; 2) this introduces good excuses to work on multi-screen-size support ;)


Ethran commented Feb 28, 2026

@jdkruzr I looked at the docs, and have some questions, and remarks:

  1. What will happen when two devices try to sync at the exact same time? Can some race conditions or invalid states arise?
     • what will happen if two devices try to write to deletions.json and folders.json at the same time?
  2. Why did you decide to keep track of deletions and modifications using deletions.json and folders.json? I feel like it might cause some nasty race conditions, and I'm not sure it will be much faster than requesting the file list and comparing timestamps (and, in the future, ETags). The files deletions.json and folders.json will grow in size very fast.
  3. What will happen on a move operation?
     • a notebook gets moved to a different folder
     • pages in a notebook are rearranged
  4. Are you uploading whole notebooks on change, or only changed pages? From the docs I understood that you are re-uploading the whole notebook, which is a bad approach. Notebooks can be large, and it does not make sense to re-upload them on every change to a single page.
  5. Why not make the structure as follows:
     • folders and notebooks are translated to folders on the server
     • on the server, each folder has:
       • a manifest:
         • what is it? a folder, a notebook, a collection of media?
         • data associated with the folder/notebook (name, default background, etc.)
         • a list of pages in this entity, to get the order of pages present in it
       • pages data, as files
       • if it's a notebook, also folders for backgrounds and images
       • subfolders, if it's a folder
  6. Handling of error 405: I wouldn't just ignore it; it means that the client has no idea what's on the server, which can cause some problems.
  7. The docs should be shorter, and contain links in the table of contents. The line numbers are useless information.
     "Why Not CouchDB / Syncthing / Other Sync Frameworks?" and "Why a Custom WebDAV Client" are not that important, and can be shortened and moved to the end of the file. When you reference a file, it's nice to give a link to it.


jdkruzr commented Feb 28, 2026

@jdkruzr I looked at the docs, and have some questions, and remarks:

  1. What will happen when two devices try to sync at the exact same time? Can some race conditions or invalid states arise?
     • what will happen if two devices try to write to deletions.json and folders.json at the same time?

devices sync on notebook open/close, not continuously like e.g. Dropbox. since this is a single-user system it's extremely unlikely you run into this condition, and it's hard to address without transaction support that you would get from something like server extensions -- and that would defeat the purpose of designing this to be usable with any bog-standard WebDAV server. you could probably make this happen if you actively tried, but you would need to have two devices open at the same time and then close or delete a note at precisely the same time -- hard to even do on purpose, never mind accidentally!

  2. Why did you decide to keep track of deletions and modifications using deletions.json and folders.json? I feel like it might cause some nasty race conditions, and I'm not sure it will be much faster than requesting the file list and comparing timestamps (and, in the future, ETags). The files deletions.json and folders.json will grow in size very fast.

a race condition here is still unlikely but distantly possible -- but merging data is idempotent enough that you mostly only run the risk of things appearing to be wrong for a moment before the next sync. the best fix for this problem though is the ETags feature. do you want to move that into this PR after all?

for folders a manifest file makes sense because WebDAV folders don't have metadata like titles or parent relationships. deletions.json, on the other hand, is needed specifically because PROPFIND can't tell you what "used to exist." if you delete a notebook locally, PROPFIND on the server still shows it -- you need to communicate "I intentionally deleted this" to other devices. deletions.json is an implementation of tombstoning (https://en.wikipedia.org/wiki/Tombstone_(data_store)). timestamps alone can't distinguish "never synced" from "deleted."

re: the JSONs growing fast -- true, I think we should implement a truncation feature for deletions. folders.json shouldn't grow that fast. but it's probably safe to truncate the tombstone file for all records older than, say, 60 days. by that point all your devices should be caught up.
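The proposed truncation could be sketched as follows (`Tombstone` and the 60-day default are assumptions from this discussion, not implemented code):

```kotlin
// A tombstone records an intentional deletion so other devices can apply it.
data class Tombstone(val id: String, val deletedAtMillis: Long)

// Drop tombstones older than the retention window; devices that have not
// synced within that window would need a forced full reconciliation.
fun truncateTombstones(
    tombstones: List<Tombstone>,
    nowMillis: Long,
    retentionDays: Long = 60
): List<Tombstone> {
    val cutoff = nowMillis - retentionDays * 24 * 60 * 60 * 1000L
    return tombstones.filter { it.deletedAtMillis >= cutoff }
}
```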

  3. What will happen on a move operation?
  • a notebook gets moved to a different folder

updates parentFolderId on the notebook, which bumps updatedAt. next sync uploads the updated manifest. works correctly with current design.

  • pages in notebook are rearranged

updates pageIds order in the manifest. same mechanism — updatedAt bumps, manifest re-uploads. also works.

  4. Are you uploading whole notebooks on change, or only changed pages? From the docs I understood that you are re-uploading the whole notebook, which is a bad approach. Notebooks can be large, and it does not make sense to re-upload them on every change to a single page.

we do upload individual page files (pages/{pageId}.json), but conflict resolution is at the notebook level (compare manifest updatedAt). so if Device A edits page 3 and Device B edits page 7, whichever device syncs last wins the entire notebook. this is a trade-off: true per-page sync would require per-page timestamps in the manifest and a page-level diff/merge — significantly more complex. given the target use case (personal device, rarely true multi-device concurrent editing), notebook-level LWW is a reasonable V1. we may be able to take advantage of ETags in the future to help this, though.

  5. Why not make the structure as follows:
     • folders and notebooks are translated to folders on the server
     • on the server, each folder has:
       • a manifest:
         • what is it? a folder, a notebook, a collection of media?
         • data associated with the folder/notebook (name, default background, etc.)
         • a list of pages in this entity, to get the order of pages present in it
       • pages data, as files
       • if it's a notebook, also folders for backgrounds and images
       • subfolders, if it's a folder

this is another tradeoff: folders are lightweight metadata (no binary content), so a single file avoids N extra PROPFIND/GET round-trips per folder. if we did it the way you're describing instead, we would incur a cost of potentially many, many HTTP calls to resolve, and WebDAV is notoriously slow at this at the server level.

  6. Handling of error 405: I wouldn't just ignore it; it means that the client has no idea what's on the server, which can cause some problems.

this is actually in conformance with the spec: we only ignore 405 specifically on MKCOL -- which is the correct behavior per RFC 4918. MKCOL returns 405 when the collection already exists. This is idempotent and safe.
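The described behavior boils down to a one-line predicate (a sketch; the function name is hypothetical, not the PR's actual client code):

```kotlin
// Per RFC 4918, MKCOL on an existing collection returns 405, so an
// "ensure directory exists" operation can treat that code as success.
fun mkcolEnsuredCollection(responseCode: Int): Boolean =
    responseCode in 200..299 || responseCode == 405 // 405 = already exists
```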

  7. The docs should be shorter, and contain links in the table of contents. The line numbers are useless information.
    "Why Not CouchDB / Syncthing / Other Sync Frameworks?" and "Why a Custom WebDAV Client" are not that important, and can be shortened and moved to the end of the file. When you reference a file, it's nice to give a link to it.

made a bunch of edits to this effect and moved stuff into appendices.


Ethran commented Mar 1, 2026

The main question I think is:
how do others do it? I couldn't find any example of an app with a similar structure using WebDAV.

since this is a single user system it's extremely unlikely you run into this condition

If it is possible, it will happen eventually, and possibly in the worst situation. Then debugging it is a pain, as it is not easily reproducible. Is there any option to fix it?

You are right that we have to know somehow that something was deleted. Other than your solution, I see only one option: ETags. If an entity has an ETag from the server, then it was synced and should be deleted.

the best fix for this problem though is the ETags feature. do you want to move that into this PR after all?

Not for now; the timestamps are enough to get the implementation working. And I don't get it: how would ETags fix it?

re: the JSONs growing fast -- true, I think we should implement a truncation feature for deletions. folders.json shouldn't grow that fast. but it's probably safe to truncate the tombstone file for all records older than, say, 60 days. by that point all your devices should be caught up.

I don't like this assumption that after 60 days all the devices will be up to date. If we truncate this data after 60 days, then a device whose last sync was more than 60 days ago should be forced to check that everything is up to date.

this is another tradeoff: folders are lightweight metadata (no binary content), so a single file avoids N extra PROPFIND/GET round-trips per folder. if we did it the way you're describing instead, we would incur a cost of potentially many, many HTTP calls to resolve, and WebDAV is notoriously slow at this at the server level.

In my proposition I would see it like this:
step 1) run PROPFIND on the root folder
step 2) for every subfolder that changed since the last sync, repeat step 1)
step 3) for every file that changed since the last sync, synchronize it

So if we modify page2 (add a stroke) in `/folder1/folder2/notebook1/page2` then we would do 4 PROPFINDs, and then one GET.

In your case:

  1. GET /notable/folders.json (if exists)
  2. PUT /notable/folders.json
  3. GET /notable/deletions.json
  4. GET manifest.json, of notebook that changed
  5. we know what changed

So there is not that much of a difference. Yours gets more time-consuming as there are more notebooks, mine if there are a lot of nested folders.


Ethran commented Mar 1, 2026

HTTP 1.1 clients can be good citizens, avoiding overwriting other clients' changes, by using entity tags in If-Match headers with any requests that would modify resources.

http://www.webdav.org/specs/rfc2518.html

The spec supports conflict resolution. I would say that every write to the server should be behind If-Match if we already know what the ETag should be, i.e. we just fetched the file, and later on, we have the ETag saved in the db.

  1. Having the folders list in JSON has one advantage: on moving a notebook into a different folder, it's easy to update the app: we know that it was moved, and not deleted and recreated.

I would say that the architecture is ok; I'm not entirely convinced it is the best we can do. I would really want to see how others are doing it. Also, when we switch to using ETags, should we deprecate 'deletions.json'? If a local notebook has an ETag (that was fetched from the server), but it does not exist on the server, then it was deleted.

I will try to slowly go through the code and review the details, starting from the storage of a singular notebook, as from an architecture point of view it most likely won't change -- I don't see any other option for implementing it.

Ethran (Owner) left a comment:

I read some files, added some comments.

SyncEngine and WebDAVClient are quite big; I will need some time to go through them.

A few remarks:

  • You added a SyncLogger class, but you only use it sometimes; it's inconsistent. Should it be this way?
  • Why do you need to write your own serialization? I think that most of it can just reuse what the db is doing right now.

Ethran (Owner) commented:

remove all files from .idea folder from this PR

if (pageId == null) return

try {
val appRepository = com.ethran.notable.data.AppRepository(context)

Ethran (Owner) commented:

If this function is called frequently, we shouldn't initialize appRepository on every run.

private var history: History,
private val state: EditorState
private val state: EditorState,
private val context: Context

Ethran (Owner) commented:

To be considered:
do we need context or appRepository?

page.disposeOldPage()

// Trigger sync on note close if enabled
val settings = GlobalAppSettings.current

Ethran (Owner) commented:

why not syncSettings? Do you need settings other than those related to sync?

stream.close()
} catch (e: IOException) {
e.printStackTrace()
Log.e(TAG, "Failed to save shared image: ${e.message}", e)

Ethran (Owner) commented:

Why this change here?

id = folder.id,
title = folder.title,
parentFolderId = folder.parentFolderId,
createdAt = iso8601Format.format(folder.createdAt),

Ethran (Owner) commented:

Why not just use the default time format of the db?

Ethran (Owner) commented:

ok, I see why: the db could change, and we can't migrate the server easily

*/
fun serializeManifest(notebook: Notebook): String {
val manifestDto = NotebookManifestDto(
version = 1,

Ethran (Owner) commented:

Is this the only change compared to Notebook from the db (other than the time format)? Maybe we can inherit from Notebook, add version, and use default serialization?

*/
fun serializePage(page: Page, strokes: List<Stroke>, images: List<Image>): String {
// Serialize strokes with embedded base64-encoded SB1 binary points
val strokeDtos = strokes.map { stroke ->

Ethran (Owner) commented:

Again, we already have some serialization, and here the only difference is the encoding of time.

*/
object SyncScheduler {

private const val DEFAULT_SYNC_INTERVAL_MINUTES = 5L

Ethran (Owner) commented:

isn't the default 15 min?

import java.util.Locale

@Composable
fun SyncSettings(kv: KvProxy, settings: AppSettings, context: Context) {

Ethran (Owner) commented:

The UI should be decoupled from logic. I think the best practice is to use a ViewModel, but I'm not sure yet. I will try to open a PR that will clean up the UI across the app today, but for now my code is a mess; we will see how it goes.

Either way, it should be done so one can use the Preview annotation.

Ethran (Owner) commented:

#214

Here you can see where I'm going with this; mostly I try to follow best practices for composable functions and decouple UI logic from everything else.

Ethran (Owner) left a comment:

Another portion of review

return Result.retry()
}

// Check WiFi-only setting (WorkManager constraint already handles this, but be explicit)

Ethran (Owner) commented:

There is a TODO left in the code from AI.

href.trimEnd('/').substringAfterLast('/')
}
.filter { filename ->
// Only include valid UUIDs

Ethran (Owner) commented:

It seems like a comment left by AI.

message = "Are you sure you want to delete \"${book!!.title}\"?",
onConfirm = {
bookRepository.delete(bookId)
val deletedNotebookId = bookId

Ethran (Owner) commented:

Why change it?

return packageInfo.versionName
} catch (e: PackageManager.NameNotFoundException) {
e.printStackTrace()
Log.e(TAG, "Package not found: ${e.message}", e)

Ethran (Owner) commented:

Why change it?

this.lifecycleScope.launch(Dispatchers.IO) {
reencodeStrokePointsToSB1(this@MainActivity)
}

Ethran (Owner) commented:

Move it to a separate function; keep onCreate short for easier navigation.


**Important**: Notable will automatically append `/notable` to your server URL to keep your data organized. For example:
- You enter: `https://nextcloud.example.com/remote.php/dav/files/username/`
- Notable creates: `https://nextcloud.example.com/remote.php/dav/files/username/notable/`

Ethran (Owner) commented:

On checking the configuration, the app should notify the user if the notable folder already exists -- to prevent situations where the user used syncing a long time ago and forgot about it. Simple text like "found notable data, last updated X days ago" would be enough. You can add more information if you want.

Ethran (Owner) left a comment:

Another part. I still need to think if the code structure is ok.

* @param wifiOnly If true, only run on unmetered (WiFi) connections
*/
fun enablePeriodicSync(context: Context, intervalMinutes: Long = DEFAULT_SYNC_INTERVAL_MINUTES, wifiOnly: Boolean = false) {
val networkType = if (wifiOnly) NetworkType.UNMETERED else NetworkType.CONNECTED

Ethran (Owner) commented:

It might be confusing, as UNMETERED is a different property than Wi-Fi. On some devices Wi-Fi networks can be marked as metered connections; I'm not sure if that is the case for Android.

response.close()
} catch (e: Exception) {
// Ignore response close errors
}

Ethran (Owner) commented:

duplicated code, and please at least log that it failed.

.build()

client.newCall(request).execute().use { response ->
io.shipbook.shipbooksdk.Log.i("WebDAVClient", "Response code: ${response.code}")

Ethran (Owner) commented:

I personally prefer to avoid writing the full package path, and just import it. The code looks nicer this way. Please add imports and change io.shipbook.shipbooksdk.Log to Log.

return try {
WebDAVClient(serverUrl, username, password).getServerTime()
} catch (e: Exception) {
null

Ethran (Owner) commented:

log the failure

notebooksDeleted = deletedCount

// 6. Sync Quick Pages (pages with notebookId = null)
// TODO: Implement Quick Pages sync

Ethran (Owner) commented:

Isn't it implemented already?

private suspend fun applyRemoteDeletions(webdavClient: WebDAVClient): DeletionsData {
SLog.i(TAG, "Applying remote deletions...")

val remotePath = "/$WEBDAV_ROOT_DIR/deletions.json"

Ethran (Owner) commented:

I would put the deletions.json into a separate variable, or write an object that keeps all the names on the server and returns paths.

If it is used in only one place then it can stay that way, but otherwise it would be better to extract this string.

*/
private suspend fun downloadPage(pageId: String, notebookId: String, webdavClient: WebDAVClient) {
// Download page JSON (contains embedded base64-encoded SB1 binary stroke data)
val pageJson = webdavClient.getFile("/$WEBDAV_ROOT_DIR/notebooks/$notebookId/pages/$pageId.json").decodeToString()

Ethran (Owner) commented:

the path generation repeats in multiple places. I would put it into a centralized function, so that it's easy to change. It's a loose suggestion; see what makes sense.

I like to have a function for these things, since then I can't make spelling mistakes, and changing it is easier. Sometimes it doesn't make sense, but here I would consider adding an object SyncFileStructure with functions getNotebookFolderPath(notebookId), getNotebookManifestPath(...), getPageDataPath(notebookId, pageId), etc.
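The suggested object could be sketched like this (paths are assumed from snippets quoted in this review; only getNotebookFolderPath, getNotebookManifestPath, and getPageDataPath are named in the suggestion, the rest are illustrative):

```kotlin
// Central place for the server-side file layout, so path strings are
// defined once and spelling mistakes can't creep in at call sites.
object SyncFileStructure {
    private const val WEBDAV_ROOT_DIR = "notable" // assumed root directory name

    fun getFoldersPath() = "/$WEBDAV_ROOT_DIR/folders.json"
    fun getDeletionsPath() = "/$WEBDAV_ROOT_DIR/deletions.json"
    fun getNotebookFolderPath(notebookId: String) = "/$WEBDAV_ROOT_DIR/notebooks/$notebookId"
    fun getNotebookManifestPath(notebookId: String) = "${getNotebookFolderPath(notebookId)}/manifest.json"
    fun getPageDataPath(notebookId: String, pageId: String) =
        "${getNotebookFolderPath(notebookId)}/pages/$pageId.json"
}
```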

/**
* Detect MIME type from file extension.
*/
private fun detectMimeType(file: File): String {
Owner

Don't we have such a function somewhere in the code already? If not, it should be added to a utils file, as we have to detect the type in other places too.
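If no such helper exists yet, a minimal shared utility could look like the sketch below; the function name and extension map are illustrative, not taken from the codebase.

```kotlin
import java.io.File

// Hypothetical shared helper (e.g. in a utils file); the map is illustrative
// and would grow with whatever attachment types the app actually stores.
private val extensionToMime = mapOf(
    "jpg" to "image/jpeg",
    "jpeg" to "image/jpeg",
    "png" to "image/png",
    "webp" to "image/webp",
    "json" to "application/json",
)

fun detectMimeType(file: File): String =
    extensionToMime[file.extension.lowercase()] ?: "application/octet-stream"
```

On Android, `android.webkit.MimeTypeMap.getSingleton().getMimeTypeFromExtension(ext)` covers far more types and may be preferable to a hand-rolled map.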

@Ethran
Ethran commented Mar 5, 2026

I added some custom instructions to Copilot for reviewing code; let's see how well it works now.

Copilot AI left a comment

Pull request overview

Copilot reviewed 28 out of 34 changed files in this pull request and generated 9 comments.

Files not reviewed (4)
  • .idea/codeStyles/Project.xml: Language not supported
  • .idea/codeStyles/codeStyleConfig.xml: Language not supported
  • .idea/deploymentTargetSelector.xml: Language not supported
  • .idea/markdown.xml: Language not supported


Comment on lines +84 to +89
/**
* Parse ISO 8601 date string to Date object.
*/
private fun parseIso8601(dateString: String): Date {
return iso8601Format.parse(dateString) ?: Date()
}
Copilot AI Mar 5, 2026

parseIso8601() can throw if the server sends a malformed timestamp (SimpleDateFormat.parse throws ParseException). Since this data is remote/untrusted, this can abort the entire folder sync. NotebookSerializer.parseIso8601() already guards with try/catch; consider matching that pattern here (return a fallback Date or surface a controlled sync error) to keep sync robust.
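A guarded variant matching the NotebookSerializer pattern the comment references might look like this; `iso8601Format`, `SLog`, and `TAG` are assumed from the surrounding class.

```kotlin
import java.text.ParseException
import java.util.Date

// Guarded parse: remote data must never be able to abort the whole folder sync.
private fun parseIso8601(dateString: String): Date {
    return try {
        iso8601Format.parse(dateString) ?: Date()
    } catch (e: ParseException) {
        SLog.w(TAG, "Malformed timestamp from server: $dateString")
        Date() // fallback; last-write-wins then treats the entry as "modified now"
    }
}
```

Note the fallback is not neutral: returning `Date()` makes the malformed entry look freshly modified, which biases conflict resolution toward it. Surfacing a controlled sync error instead is the stricter option.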

Comment on lines +494 to +498
`SyncScheduler` enqueues a `PeriodicWorkRequest` with:
- Default interval: 5 minutes (configurable).
- Network constraint: `NetworkType.CONNECTED` (won't run without network).
- Policy: `ExistingPeriodicWorkPolicy.KEEP` (doesn't restart if already scheduled).

Copilot AI Mar 5, 2026

The WorkManager details here don’t match the current implementation: SyncScheduler uses ExistingPeriodicWorkPolicy.UPDATE (not KEEP), and SyncSettings defaults to a 15-minute interval (WorkManager minimum), while this text says 5 minutes. Please align these bullets with SyncScheduler.kt and SyncSettings defaults so the technical doc reflects real behavior.

<DropdownSelection timestamp="2025-12-19T23:05:28.223853994Z">
<Target type="DEFAULT_BOOT">
<handle>
<DeviceId pluginId="PhysicalDevice" identifier="serial=7b735d3b" />
Copilot AI Mar 5, 2026

This file embeds a physical device identifier (serial=...) in the repo. Deployment target selections are user/machine-specific and can leak identifiers; please revert/remove this serialized device ID (and consider excluding deploymentTargetSelector.xml from version control if possible).

Suggested change
<DeviceId pluginId="PhysicalDevice" identifier="serial=7b735d3b" />
<DeviceId pluginId="PhysicalDevice" />

Comment on lines +304 to +311
// Get sync settings and credentials
val settings = kvProxy.get(APP_SETTINGS_KEY, AppSettings.serializer())
?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)

if (!settings.syncSettings.syncEnabled) {
return@withContext SyncResult.Success
}

Copilot AI Mar 5, 2026

uploadDeletion() doesn’t enforce the WiFi-only setting (syncSettings.wifiOnly). As written, a notebook delete can trigger network calls over mobile data even when the user enabled “Sync on WiFi only”. Consider applying the same WiFi-only gating logic used in syncAllNotebooks()/syncNotebook() (or explicitly documenting that deletions/force ops bypass WiFi-only).

val credentialManager = CredentialManager(applicationContext)
if (!credentialManager.hasCredentials()) {
Log.w(TAG, "No credentials stored, skipping sync")
return Result.failure()
Copilot AI Mar 5, 2026

Returning Result.failure() when credentials are missing will put the periodic Work into a FAILED state; in WorkManager this can prevent future periodic runs even after the user later saves credentials. Since this is a non-transient “skip” condition, prefer Result.success() (or cancel/disable the periodic work) so periodic sync can resume automatically when credentials are configured.

Suggested change
return Result.failure()
return Result.success()

Comment on lines +222 to +226
## Privacy & Security

- **Credentials**: Stored securely in Android's CredentialManager (encrypted storage)
- **Data in transit**: Uses HTTPS for secure communication (recommended)
- **Data at rest**: Depends on your WebDAV provider's security
Copilot AI Mar 5, 2026

This guide says credentials are stored in Android’s CredentialManager, but the implementation stores them in EncryptedSharedPreferences via androidx.security.crypto (see CredentialManager.kt). Please update the documentation to reflect the actual storage mechanism to avoid misleading users evaluating security properties.

Comment on lines +71 to +72
val (settings, webdavClient) = initializeSyncClient()
?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
Copilot AI Mar 5, 2026

If initializeSyncClient() returns null, syncAllNotebooks() returns early with CONFIG_ERROR after already setting SyncState.Syncing. That code path never updates syncState back to Idle/Error, so the UI can get stuck showing “syncing” even though the mutex is unlocked. Before returning early, set an appropriate SyncState (e.g., Error(CONFIG_ERROR) or Idle) to keep state machine consistent.

Suggested change
val (settings, webdavClient) = initializeSyncClient()
?: return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
val initResult = initializeSyncClient()
if (initResult == null) {
updateState(
SyncState.Error(
error = SyncError.CONFIG_ERROR,
step = SyncStep.INITIALIZING,
canRetry = false
)
)
return@withContext SyncResult.Failure(SyncError.CONFIG_ERROR)
}
val (settings, webdavClient) = initResult

Comment on lines +201 to +205
suspend fun syncNotebook(notebookId: String): SyncResult = withContext(Dispatchers.IO) {
return@withContext try {
SLog.i(TAG, "Syncing notebook: $notebookId")

// Get sync settings and credentials
Copilot AI Mar 5, 2026

syncNotebook() does not acquire the global syncMutex that syncAllNotebooks() uses. This allows sync-on-close / page-switch syncs to run concurrently with a full sync (or other single-notebook syncs), which can lead to overlapping WebDAV operations and concurrent local DB writes. Consider using the same mutex (tryLock + SYNC_IN_PROGRESS) for syncNotebook() (and similarly for other entry points like uploadDeletion/force ops) to guarantee single-device sync exclusivity as intended by the design docs.

Comment on lines +201 to +212
/**
* Convert absolute file URI to relative path for WebDAV storage.
* Example: /storage/emulated/0/Documents/notabledb/images/abc123.jpg -> images/abc123.jpg
*/
private fun convertToRelativeUri(absoluteUri: String?): String {
if (absoluteUri == null) return ""

// Extract just the filename and parent directory
val file = File(absoluteUri)
val parentDir = file.parentFile?.name ?: ""
val filename = file.name

Copilot AI Mar 5, 2026

convertToRelativeUri() returns an empty string when the Image uri is null, and ImageDto.uri is non-nullable. This means downloaded images can end up stored with uri="" (non-null), and uploadPage() checks only image.uri != null, which would treat an empty string as a real path and attempt File("") operations. Consider making ImageDto.uri nullable (String?) and preserving nulls (or consistently treating blank as null on both serialize/deserialize + upload).

@Ethran

Ethran commented Mar 5, 2026

@jdkruzr please answer questions about architecture ( #189 (comment) ).

The code requires a lot of cleaning, and the settings UI will have to be changed a lot, but that's not a priority until #214 is merged (I hope to complete it this weekend). The main issue with the current approach is that it mixes UI and other logic.

Another change, depending on #214: I think that moving forward it would be better to use Hilt:
https://developer.android.com/training/dependency-injection/hilt-android
I already started migrating to it, and it seems to be the right choice.

For now, I would ask you to check the review comments and focus on improving the code quality of the purely sync-related parts. I think it can be merged in the next two weeks. And please fix the race condition with deletions.json.

After the currently requested changes are made, I will probably do some more cleaning, so you can expect some commits from me.

@jdkruzr jdkruzr force-pushed the feature/webdav-sync branch from d7ee456 to 3363131 Compare March 7, 2026 22:59
@jdkruzr

jdkruzr commented Mar 7, 2026

ok, I asked Claude to collate and describe all the changes made in response to this feedback:

Change log (what was done and why, by comment)

Copilot 2889632453: Result.failure() when credentials missing (SyncWorker.kt)

Comment: Returning Result.failure() when there are no credentials means WorkManager marks the task as permanently failed and may stop retrying. The right behaviour here is Result.success() — no credentials is a configuration state, not a transient error. We successfully decided not to sync; that's a success from WorkManager's point of view.

Change: Result.failure() → Result.success() at the no-credentials check.
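A sketch of what the corrected branch looks like, using the names from the diff above (`CredentialManager`, `hasCredentials()`):

```kotlin
// Inside SyncWorker (a CoroutineWorker); sketch of the no-credentials branch only.
override suspend fun doWork(): Result {
    val credentialManager = CredentialManager(applicationContext)
    if (!credentialManager.hasCredentials()) {
        Log.w(TAG, "No credentials stored, skipping sync")
        // Non-transient skip: returning success keeps the periodic chain
        // alive, so sync resumes once the user saves credentials.
        return Result.success()
    }
    // ... run the actual sync ...
    return Result.success()
}
```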


Ethran 2876776354: AI-written comment in SyncWorker.kt ("There is todo left in the code from AI")

What was there: // Check WiFi-only setting (WorkManager constraint already handles this, but be explicit) — reads like LLM hedging, and the parenthetical is also wrong (WorkManager constraints fire at scheduling time, not execution time, so the in-process check is genuinely needed).
Change: Removed the comment. The code speaks for itself.


Ethran 2869219760 + Copilot 2646771705: SyncScheduler default interval 5 → 15 minutes

Comments: Ethran: "isn't the default 15 min?" Copilot: "WorkManager enforces a minimum periodic work interval of 15 minutes."
Change: DEFAULT_SYNC_INTERVAL_MINUTES = 5L → 15L. Added a comment explaining the WorkManager minimum.


Ethran 2869176780 + 2884443502: NetworkType.UNMETERED vs WiFi (SyncScheduler + ConnectivityChecker)

Comments: Ethran on ConnectivityChecker: "consider also checking if connection is metered: NET_CAPABILITY_NOT_METERED." Ethran on SyncScheduler: "UNMETERED is different property then wifi."
Why: NetworkType.UNMETERED (WorkManager's constraint) maps to NET_CAPABILITY_NOT_METERED, which covers WiFi and ethernet but excludes metered mobile data. The original isWiFiConnected() used TRANSPORT_WIFI — stricter than UNMETERED (would reject ethernet). Changed ConnectivityChecker to use NET_CAPABILITY_NOT_METERED so the in-process check is consistent with the WorkManager constraint, and renamed the method isUnmeteredConnected() to match. Also added a comment in SyncScheduler explaining what UNMETERED means.


Copilot 2889632378: parseIso8601() missing try/catch in FolderSerializer.kt

Comment: "parseIso8601() can throw if the server sends a malformed timestamp."
Change: Added try/catch returning Date() as fallback, matching the same pattern already used in NotebookSerializer.parseIso8601().


Ethran 2869196995: FolderDto duplicates Folder entity fields (FolderSerializer.kt)

Comment: "Its duplication of code in data.db.Folder.kt lines 25-35." (Ethran later self-answered at 2871493368: "ok, I see why: the db could change, and we can't migrate the server easily.")
Why documented, not fixed: FolderDto is a deliberate DTO boundary. The Room entity uses Java Date for timestamps and carries Room-specific annotations; FolderDto uses plain String fields so kotlinx.serialization can handle it without a custom serializer. Added a comment explaining this so future readers don't "clean it up" by collapsing them.


Copilot 2889632524: ImageDto.uri: String should be nullable (NotebookSerializer.kt)

Comment: "convertToRelativeUri() returns an empty string when the Image uri is null, and ImageDto.uri is non-nullable."
Change: ImageDto.uri changed to String?. convertToRelativeUri() now returns null for null input instead of "".


Ethran 2869158428: appRepository created on every call (EditorControlTower.kt)

Comment: "If this function is called frequently, we shouldn't initialize appRepository on every run."
Change: Moved appRepository to a class field initialized once at construction. Also added proper import com.ethran.notable.data.AppRepository.


Copilot 2646771728: deletedNotebookId = bookId redundant (NotebookConfig.kt)

Comment: "The deletedNotebookId variable is redundant."
Change: Removed intermediate variable, use bookId directly. Also added import com.ethran.notable.sync.SyncEngine instead of inline full package path.


Ethran 2876813450: Move sync init block in MainActivity

Comment: "Move it to separate function, keep the onCreate short for easier navigating."
Change: Extracted to private suspend fun triggerInitialSync(). Also added import com.ethran.notable.sync.SyncEngine.


Ethran 2869150557 + Copilot 2889632415: .idea/ files in git

Comments: Ethran: "remove all files from .idea folder from this PR." Copilot flagged deploymentTargetSelector.xml containing a device serial number.
Change: git rm --cached all 16 tracked .idea/ files. Updated .gitignore to exclude /.idea/ entirely rather than individual subdirectories.


Ethran 2884550860 + 2884629349: Centralized path construction (SyncPaths.kt)

Comments: "I would put the deletions.json into separate variable" and "the path generation repeats in multiple places. I would put it into centralized function."
Change: Created SyncPaths.kt — an object with functions for every server path (rootDir(), notebookDir(id), manifestFile(id), tombstone(id), etc.). All path construction in SyncEngine.kt now uses SyncPaths instead of inline string concatenation.
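The rough shape of such an object, inferred from the paths visible elsewhere in this PR (the exact function set and root-directory constant in SyncPaths.kt may differ):

```kotlin
// Single source of truth for server-side paths; callers never concatenate strings.
object SyncPaths {
    private const val ROOT = "notable" // WEBDAV_ROOT_DIR

    fun rootDir() = "/$ROOT"
    fun foldersFile() = "/$ROOT/folders.json"
    fun tombstonesDir() = "/$ROOT/tombstones"
    fun tombstone(notebookId: String) = "${tombstonesDir()}/$notebookId"
    fun notebookDir(notebookId: String) = "/$ROOT/notebooks/$notebookId"
    fun manifestFile(notebookId: String) = "${notebookDir(notebookId)}/manifest.json"
    fun pageFile(notebookId: String, pageId: String) =
        "${notebookDir(notebookId)}/pages/$pageId.json"
}
```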


Ethran 2884502328: Full package path for Log in WebDAVClient.kt

Comment: "please add imports and change io.shipbook.shipbooksdk.Log to Log."
Change: Added import io.shipbook.shipbooksdk.Log, replaced all full-path calls with Log.x().


Ethran 2884488254: Duplicated code / log failure in WebDAVClient.exists()

Comment: "duplicated code, and please at least log that it failed."
Change: Added Log.w(TAG, "exists($path) check failed: ${e.message}") in the catch block.


Ethran 2884521054: Log failure in WebDAVClient.testConnection()

Comment: "log the failure."
Change: Added Log.e(TAG, "Connection test failed: ${e.message}", e).


Ethran 2876783312: AI debug comment in WebDAVClient.kt

Comment: "It's seems like comment left by AI." (referring to // DEBUG: Log the raw response block)
Change: Removed the debug logging block entirely. Also added isValidUuid() helper and listCollectionWithMetadata() method (needed for tombstone conflict resolution).


Ethran 2884643178: UUID detection function

Comment: "Don't we have such function already somewhere in the code? If not it should be added in some utils file."
Change: Added private fun isValidUuid(name: String): Boolean in WebDAVClient.kt for filtering PROPFIND results. It's private to WebDAVClient since that's the only call site.
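One way to implement the check without a regex is to lean on `UUID.fromString`, which is strict enough for filtering PROPFIND entries (though it does tolerate some short-group inputs):

```kotlin
import java.util.UUID

// True when the directory/file name parses as a UUID; used to skip
// unrelated entries returned by PROPFIND.
private fun isValidUuid(name: String): Boolean = try {
    UUID.fromString(name)
    true
} catch (e: IllegalArgumentException) {
    false
}
```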


Copilot 2889632490: initializeSyncClient() null leaves state stuck (SyncEngine.kt)

Comment: "If initializeSyncClient() returns null, syncAllNotebooks() returns early with CONFIG_ERROR after already setting SyncState.Syncing. That code path never updates syncState back to Idle/Error."
Change: initializeSyncClient() now calls updateState(SyncState.Error(...)) before returning null.


Copilot 2889632511: syncNotebook() doesn't acquire mutex (SyncEngine.kt)

Comment: "syncNotebook() does not acquire the global syncMutex... allows sync-on-close syncs to run concurrently with a full sync."
Change: Split into public syncNotebook() (checks syncMutex.isLocked, skips if full sync running) and private syncNotebookImpl() (called from syncExistingNotebooks() which already runs inside the mutex).
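The split described above could take this shape; `SYNC_IN_PROGRESS` and `syncNotebookImpl` are the names from the changelog, and the `tryLock` variant is shown because a bare `isLocked` check leaves a window between checking and starting work.

```kotlin
import kotlinx.coroutines.sync.Mutex

// Same Mutex instance used by syncAllNotebooks().
private val syncMutex = Mutex()

suspend fun syncNotebook(notebookId: String): SyncResult {
    // tryLock both skips when a full sync holds the lock and serializes
    // concurrent single-notebook syncs against each other.
    if (!syncMutex.tryLock()) {
        SLog.i(TAG, "Another sync in progress, skipping notebook $notebookId")
        return SyncResult.Failure(SyncError.SYNC_IN_PROGRESS)
    }
    return try {
        syncNotebookImpl(notebookId)
    } finally {
        syncMutex.unlock()
    }
}
```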


Copilot 2889632434: uploadDeletion() doesn't enforce WiFi-only (SyncEngine.kt)

Comment: "uploadDeletion() doesn't enforce the WiFi-only setting."
Change: Added wifiOnly check at the top of uploadDeletion(), matching the same check in syncAllNotebooks() and syncNotebook().


Tombstone implementation (replaces deletions.json)

Not tied to a single comment; addresses the race-condition discussion across multiple comments, especially Ethran's architecture question and 2869185228 on DeletionsSerializer.kt.
Changes:

  • uploadDeletion(): PUT zero-byte file to tombstones/{id} instead of read-modify-write on deletions.json
  • applyRemoteDeletions(): PROPFIND tombstones/ with metadata; uses server lastModified for conflict resolution
  • detectAndUploadLocalDeletions(): PUT tombstones instead of updating deletions.json
  • uploadNotebook(): removes stale tombstone on resurrection
  • ensureServerDirectories(): creates tombstones/ directory
  • migrateDeletionsJsonToTombstones(): one-time migration from old format

Tombstone pruning

Not a PR comment — added during our session to close the known limitation from the technical doc (was listed as future work item #4).
Change: After processing tombstones in applyRemoteDeletions(), delete any with lastModified older than 90 days. Tombstones with null lastModified are kept. Constant: TOMBSTONE_MAX_AGE_DAYS = 90L.
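The write-and-prune flow above can be sketched as follows. The `WebDAVClient` method shapes (`putFile`, `deleteFile`, the entry fields) are assumptions; only `listCollectionWithMetadata()` is named in this changelog.

```kotlin
import java.util.concurrent.TimeUnit

// Zero-byte PUT: one independent file per deleted notebook, so there is
// no shared deletions.json for two devices to race on.
suspend fun uploadDeletion(notebookId: String, client: WebDAVClient) {
    client.putFile("/notable/tombstones/$notebookId", ByteArray(0))
}

suspend fun pruneOldTombstones(client: WebDAVClient) {
    val cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(90) // TOMBSTONE_MAX_AGE_DAYS
    client.listCollectionWithMetadata("/notable/tombstones/").forEach { entry ->
        val lastModified = entry.lastModified
        // Tombstones with a null lastModified are kept, per the rule above.
        if (lastModified != null && lastModified.time < cutoff) {
            client.deleteFile(entry.path)
        }
    }
}
```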


Ethran 2869185228: DeletionsSerializer.kt cleanup

Comment: "Hmm? If it's already deprecated, remove it." (about @Deprecated on deletedNotebookIds)
What we did vs what was asked: Ethran said remove the field. We kept it because migrateDeletionsJsonToTombstones() needs to read old deletions.json files that may contain deletedNotebookIds. Without the field, deserialization of legacy data would silently drop those entries. Instead: removed the @Deprecated annotation (the whole class is now migration-only, so "deprecated" no longer makes sense), removed the unused pruned() method, and updated KDoc to explain the class exists solely for migration.


Copilot 2889632399 + 2889632475: Documentation fixes

Comments: "WorkManager details here don't match the current implementation" (technical doc). "This guide says credentials are stored in Android's CredentialManager" (user doc).
Changes in webdav-sync-technical.md: Updated sync protocol steps 3 and 6 for tombstones; updated directory structure; replaced deletions.json section with tombstone description; updated conflict resolution; fixed enum (removed SERVER_ERROR/CONFLICT_ERROR); fixed default interval; updated future work. Version bumped to 1.2.
Changes in webdav-sync-user.md: Updated data format tree; fixed credential storage description from "CredentialManager" to "EncryptedSharedPreferences (AES-256-GCM)". Version bumped to 1.1.


GitHub responses needed (no code changes, just replies)

Ethran 2869166765: "Why this change here?" (share.kt line 36)

Replaced e.printStackTrace() with Log.e(TAG, ...) for consistent structured logging. printStackTrace() writes to stderr which doesn't route through ShipBook; Log.e() does. This was part of a logging cleanup pass across the branch (commit 2ea67fb).

Ethran 2876795673: "Why to change it?" (versionChecker.kt line 142)

Same as share.kt: e.printStackTrace() → Log.e() for consistent structured logging through ShipBook.

Ethran 2876789699: "Why to change it?" (NotebookConfig.kt line 125)

This adds the uploadDeletion() call so that when a user deletes a notebook from the UI, the deletion is immediately pushed to the server (as a tombstone) without waiting for the next periodic sync. The deletedNotebookId variable that Copilot flagged as redundant has also been removed.

Ethran 2869164406: "why not syncSettings?" (EditorView.kt line 158)

Good point — we only read syncSettings from the result. Could narrow it to val syncSettings = GlobalAppSettings.current.syncSettings. Low priority since it's just two property accesses either way, but cleaner.


Not addressed (may need future work)

  • Ethran 2876853061 (webdav-sync-user.md line 50): "app should notify user if notable folder already exists" — UI feature, deferred.
  • Ethran 2722904083 (webdav-sync-user.md line 143): More descriptive error messages ("url does not exist", "wrong password/username") — requires server response parsing, deferred.
  • Ethran 2722877062 (webdav-sync-user.md line 57): "it's a gear wheel" — the user doc says "Settings (gear wheel icon)" which matches; no change needed.

Response to architecture questions (March 1 comment)

Prior art: One of the few applications similar to Notable that does WebDAV sync is Saber, so I looked carefully at how it implements WebDAV sync. Their approach is instructive: lib/data/nextcloud/saber_syncer.dart. Key design:

  • No shared aggregation files at all. Pure PROPFIND for discovery.
  • Zero-byte tombstones for deletions: when a file is deleted, they PUT a 0-byte file to the same remote path. Any device that sees size == 0 on a remote file knows it was deleted. No shared deletions.json that multiple devices can race to write.
  • Last-write-wins on lastModified timestamps; no ETags used.

Their approach eliminates the race condition entirely because there are no shared mutable aggregation files — each file write is independent.

Notable can't fully adopt this because we have structured metadata with no equivalent in Saber's model (folder hierarchy, page ordering, notebook titles). That's a real trade-off. But we can adopt zero-byte tombstones to eliminate deletions.json, which removes the concrete race condition you flagged. The note in the code will flag ETags as the proper long-term solution per RFC 2518, since they'd allow If-Match-guarded writes on the remaining shared files (folders.json).

On folders.json: the race condition there is the same structural problem. I'm documenting it but leaving it for after ETags land — fixing it without server-enforced atomicity would require more complex retry logic than is worth it for V1.

ETags deprecating deletions.json: Yes, exactly right — once we store ETags locally, a missing remote file with a known ETag unambiguously means "deleted on server." Zero-byte tombstones are a bridge to that world; they solve the problem now without requiring ETag support. For what it's worth, it does look like strong ETags are more or less universally supported in most popular WebDAV servers, so taking advantage of them as a feature eventually shouldn't be problematic from a "will the server support this" standpoint.
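An If-Match-guarded write on the remaining shared file (folders.json) could look like this with OkHttp; the function and its place in WebDAVClient are assumptions. A 412 response means another device updated the file since we last read it, so the caller must re-fetch and re-merge.

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

// Returns false on 412 Precondition Failed (stale ETag: we lost the race).
fun putIfMatch(client: OkHttpClient, url: String, body: ByteArray, etag: String): Boolean {
    val request = Request.Builder()
        .url(url)
        .header("If-Match", etag)
        .put(body.toRequestBody("application/json".toMediaType()))
        .build()
    client.newCall(request).execute().use { response ->
        return when (response.code) {
            412 -> false // precondition failed: re-fetch, re-merge, retry
            else -> response.isSuccessful
        }
    }
}
```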

@Ethran

Ethran commented Mar 8, 2026

The main branch has to be merged into this, which will require:

  • the sync class should make use of Hilt dependency injection; it will clean up the logic a lot.
  • in SyncSettingsTab.kt, logic should be moved to a ViewModel, and SyncSettingsTab.kt should be responsible only for UI.
  • removing the .idea files from this commit.

If you will be using AI to do this, please add instructions to follow best programming practices for Kotlin and Jetpack Compose, and for Kotlin development in general.

@Ethran

Ethran commented Mar 8, 2026

Change: DEFAULT_SYNC_INTERVAL_MINUTES = 5L → 15L. Added comment explaining the WorkManager minimum.

Are you sure that now it is consistent throughout the code?

git rm --cached all 16 tracked .idea/ files. Updated .gitignore to exclude /.idea/ entirely rather than individual subdirectories.

PLEASE check the AI response before sending PR.

@Ethran Ethran left a comment

Also, when this version gets merged into the main branch, we will consider it a 1.0 version, so it shouldn't contain any already-deprecated code or migrations from older development versions.

I'm not sure what the AI did from the comments, but I suspect it tried to do a migration instead of just abandoning the old approach.

And I almost forgot: we should still use If-Match for updating the files.
#189 (comment)

└── PUT /notable/folders.json (merged result)

3. APPLY REMOTE DELETIONS
├── PROPFIND /notable/tombstones/ (Depth 1) → list of tombstone files with lastModified
Owner

Why not just name the folder deletions?
