chore(gui-client/linux): export all logs, not just app logs #4830
```diff
@@ -73,20 +73,6 @@ pub(crate) struct Managed {
     pub inject_faults: bool,
 }
 
-impl Managed {
-    #[cfg(debug_assertions)]
-    /// In debug mode, if `--inject-faults` is passed, sleep for `millis` milliseconds
-    pub async fn fault_msleep(&self, millis: u64) {
-        if self.inject_faults {
-            tokio::time::sleep(std::time::Duration::from_millis(millis)).await;
-        }
-    }
-
-    #[cfg(not(debug_assertions))]
-    /// Does nothing in release mode
-    pub async fn fault_msleep(&self, _millis: u64) {}
-}
-
 // TODO: Replace with `anyhow` gradually per <https://github.com/firezone/firezone/pull/3546#discussion_r1477114789>
 #[cfg_attr(target_os = "macos", allow(dead_code))]
 #[derive(Debug, thiserror::Error)]
```

Review comment: The fault injection tests weren't used anywhere. I really like the idea, but the code is simpler without them. If anyone has a slow hard disk and runs into issues with that, we can deal with it then.
```diff
@@ -379,6 +365,10 @@ async fn smoke_test(ctlr_tx: CtlrTx) -> Result<()> {
         })
         .await
         .context("Failed to send ExportLogs request")?;
+    ctlr_tx
+        .send(ControllerRequest::ClearLogs)
+        .await
+        .context("Failed to send ClearLogs request")?;
 
     // Give the app some time to export the zip and reach steady state
     tokio::time::sleep_until(quit_time).await;
```

Review comment on lines +362 to +365: The results are not checked anywhere, but the code is exercised.
```diff
@@ -473,6 +463,8 @@ fn handle_system_tray_event(app: &tauri::AppHandle, event: TrayMenuEvent) -> Res
 pub(crate) enum ControllerRequest {
     /// The GUI wants us to use these settings in-memory, they've already been saved to disk
     ApplySettings(AdvancedSettings),
+    /// Only used for smoke tests
+    ClearLogs,
     Disconnected,
     /// The same as the arguments to `client::logging::export_logs_to`
     ExportLogs {
```
```diff
@@ -594,6 +586,9 @@ impl Controller {
                     "Applied new settings. Log level will take effect at next app start."
                 );
             }
+            Req::ClearLogs => logging::clear_logs_inner()
+                .await
+                .context("Failed to clear logs")?,
             Req::Disconnected => {
                 tracing::info!("Disconnected by connlib");
                 self.sign_out().await?;
```
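The smoke test above sends `ExportLogs` and then `ClearLogs` over the controller channel without checking results; the controller matches on each request and dispatches to the logging module. A minimal std-only sketch of that request-channel pattern (the real code uses a tokio channel and a larger `ControllerRequest` enum; the names `Request` and `handle` here are hypothetical stand-ins):

```rust
use std::sync::mpsc;

// Hypothetical stand-in for `ControllerRequest`; the real enum carries
// more variants and payloads.
#[derive(Debug, PartialEq)]
enum Request {
    ExportLogs,
    ClearLogs,
}

// Stand-in for the controller's match arms: map each request to an action.
fn handle(req: Request) -> &'static str {
    match req {
        Request::ExportLogs => "exported",
        Request::ClearLogs => "cleared",
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Like the smoke test: fire both requests, in order, without
    // inspecting what the handler returns.
    tx.send(Request::ExportLogs).unwrap();
    tx.send(Request::ClearLogs).unwrap();
    drop(tx); // close the channel so the receiver iterator ends
    let outcomes: Vec<_> = rx.into_iter().map(handle).collect();
    assert_eq!(outcomes, ["exported", "cleared"]);
}
```

Sending requests this way exercises the code paths end to end even though, as the review comment notes, nothing asserts on the results.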
```diff
@@ -7,7 +7,12 @@ use crate::client::{
 use anyhow::{bail, Context, Result};
 use connlib_client_shared::file_logger;
 use serde::Serialize;
-use std::{fs, io, path::PathBuf, result::Result as StdResult, str::FromStr};
+use std::{
+    fs, io,
+    path::{Path, PathBuf},
+    result::Result as StdResult,
+    str::FromStr,
+};
 use tokio::task::spawn_blocking;
 use tracing::subscriber::set_global_default;
 use tracing_log::LogTracer;
```
```diff
@@ -21,10 +26,15 @@ pub(crate) struct Handles {
     pub _reloader: reload::Handle<EnvFilter, Registry>,
 }
 
+struct LogPath {
+    /// Where to find the logs on disk
+    src: PathBuf,
+    /// Where to store the logs in the zip
+    dst: PathBuf,
+}
+
 #[derive(Debug, thiserror::Error)]
 pub(crate) enum Error {
-    #[error("Couldn't compute our local AppData path")]
-    CantFindLocalAppDataFolder,
     #[error("Couldn't create logs dir: {0}")]
     CreateDirAll(std::io::Error),
     #[error("Log filter couldn't be parsed")]
```
```diff
@@ -36,8 +46,8 @@ pub(crate) enum Error {
 }
 
 /// Set up logs after the process has started
-pub(crate) fn setup(log_filter: &str) -> Result<Handles, Error> {
-    let log_path = log_path()?;
+pub(crate) fn setup(log_filter: &str) -> Result<Handles> {
+    let log_path = app_log_path()?.src;
 
     std::fs::create_dir_all(&log_path).map_err(Error::CreateDirAll)?;
     let (layer, logger) = file_logger::layer(&log_path);
```
```diff
@@ -73,8 +83,8 @@ pub(crate) fn debug_command_setup() -> Result<(), Error> {
 }
 
 #[tauri::command]
-pub(crate) async fn clear_logs(managed: tauri::State<'_, Managed>) -> StdResult<(), String> {
-    clear_logs_inner(&managed).await.map_err(|e| e.to_string())
+pub(crate) async fn clear_logs() -> StdResult<(), String> {
+    clear_logs_inner().await.map_err(|e| e.to_string())
 }
 
 #[tauri::command]
```
```diff
@@ -97,21 +107,22 @@ pub(crate) async fn count_logs() -> StdResult<FileCount, String> {
 ///
 /// This includes the current log file, so we won't write any more logs to disk
 /// until the file rolls over or the app restarts.
-pub(crate) async fn clear_logs_inner(managed: &Managed) -> Result<()> {
-    let mut dir = tokio::fs::read_dir(log_path()?).await?;
-    while let Some(entry) = dir.next_entry().await? {
-        tokio::fs::remove_file(entry.path()).await?;
+pub(crate) async fn clear_logs_inner() -> Result<()> {
+    for log_path in log_paths()?.into_iter().map(|x| x.src) {
+        let mut dir = tokio::fs::read_dir(log_path).await?;
+        while let Some(entry) = dir.next_entry().await? {
+            tokio::fs::remove_file(entry.path()).await?;
+        }
     }
 
-    managed.fault_msleep(5000).await;
     Ok(())
 }
 
 /// Pops up the "Save File" dialog
 pub(crate) fn export_logs_inner(ctlr_tx: CtlrTx) -> Result<()> {
     let now = chrono::Local::now();
     let datetime_string = now.format("%Y_%m_%d-%H-%M");
-    let stem = PathBuf::from(format!("connlib-{datetime_string}"));
+    let stem = PathBuf::from(format!("firezone_logs_{datetime_string}"));
 
     let filename = stem.with_extension("zip");
     let Some(filename) = filename.to_str() else {
         bail!("zip filename isn't valid Unicode");
```

Review comment: Changed the default zip name to match the macOS Client. I could also make it default to the Documents folder if that's worth it for UX.
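The renamed export file is just a timestamped stem with a `.zip` extension. A small sketch of that naming logic, std-only (the `zip_filename` helper is hypothetical; the real code formats the timestamp with `chrono` using `"%Y_%m_%d-%H-%M"`):

```rust
use std::path::PathBuf;

// Hypothetical helper: build the export filename from a pre-formatted
// timestamp string, matching the `firezone_logs_<timestamp>.zip` shape.
fn zip_filename(datetime_string: &str) -> PathBuf {
    // The stem has no `.`, so `with_extension` simply appends `.zip`.
    PathBuf::from(format!("firezone_logs_{datetime_string}")).with_extension("zip")
}

fn main() {
    let name = zip_filename("2024_05_01-12-30");
    assert_eq!(name, PathBuf::from("firezone_logs_2024_05_01-12-30.zip"));
}
```

Note that `Path::with_extension` replaces any existing extension, which is why keeping dots out of the timestamp format matters here.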
```diff
@@ -145,30 +156,58 @@ pub(crate) async fn export_logs_to(path: PathBuf, stem: PathBuf) -> Result<()> {
     spawn_blocking(move || {
         let f = fs::File::create(path).context("Failed to create zip file")?;
         let mut zip = zip::ZipWriter::new(f);
-        // All files will have the same modified time. Doing otherwise seems to be difficult
-        let options = zip::write::FileOptions::default();
-        let log_path = log_path().context("Failed to compute log dir path")?;
-        for entry in fs::read_dir(log_path).context("Failed to `read_dir` log dir")? {
-            let entry = entry.context("Got bad entry from `read_dir`")?;
-            let Some(path) = stem.join(entry.file_name()).to_str().map(|x| x.to_owned()) else {
-                bail!("log filename isn't valid Unicode")
-            };
-            zip.start_file(path, options)
-                .context("`ZipWriter::start_file` failed")?;
-            let mut f = fs::File::open(entry.path()).context("Failed to open log file")?;
-            io::copy(&mut f, &mut zip).context("Failed to copy log file into zip")?;
+        for log_path in log_paths().context("Can't compute log paths")? {
+            add_dir_to_zip(&mut zip, &log_path.src, &stem.join(log_path.dst))?;
         }
         zip.finish().context("Failed to finish zip file")?;
-        Ok(())
+        Ok::<_, anyhow::Error>(())
     })
     .await
     .context("Failed to join zip export task")??;
     Ok(())
 }
 
+/// Reads all files in a directory and adds them to a zip file
+///
+/// Does not recurse.
+/// All files will have the same modified time. Doing otherwise seems to be difficult
+fn add_dir_to_zip(
+    zip: &mut zip::ZipWriter<std::fs::File>,
+    src_dir: &Path,
+    dst_stem: &Path,
+) -> Result<()> {
+    let options = zip::write::FileOptions::default();
+    for entry in fs::read_dir(src_dir).context("Failed to `read_dir` log dir")? {
+        let entry = entry.context("Got bad entry from `read_dir`")?;
+        let Some(path) = dst_stem
+            .join(entry.file_name())
+            .to_str()
+            .map(|x| x.to_owned())
+        else {
+            bail!("log filename isn't valid Unicode")
+        };
+        zip.start_file(path, options)
+            .context("`ZipWriter::start_file` failed")?;
+        let mut f = fs::File::open(entry.path()).context("Failed to open log file")?;
+        io::copy(&mut f, zip).context("Failed to copy log file into zip")?;
+    }
+    Ok(())
+}
+
 /// Count log files and their sizes
 pub(crate) async fn count_logs_inner() -> Result<FileCount> {
-    let mut dir = tokio::fs::read_dir(log_path()?).await?;
-    // I spent about 5 minutes on this and couldn't get it to work with `Stream`
+    let mut total_count = FileCount::default();
+    for log_path in log_paths()? {
+        let count = count_one_dir(&log_path.src).await?;
+        total_count.files += count.files;
+        total_count.bytes += count.bytes;
+    }
+    Ok(total_count)
+}
+
+async fn count_one_dir(path: &Path) -> Result<FileCount> {
+    let mut dir = tokio::fs::read_dir(path).await?;
     let mut file_count = FileCount::default();
 
     while let Some(entry) = dir.next_entry().await? {
```
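`count_logs_inner` now folds per-directory counts into one total for display. A std-only sketch of that aggregation (field types and the `total` helper are assumptions; the real `FileCount` is the serialized struct shared with the GUI, and counting happens asynchronously per directory):

```rust
// Hypothetical mirror of `FileCount`: number of log files and their
// combined size in bytes. `u64` is an assumed field type.
#[derive(Debug, Default, PartialEq)]
struct FileCount {
    files: u64,
    bytes: u64,
}

// Fold the per-directory counts (e.g. connlib dir + app dir) into one
// total, as `count_logs_inner` does across `log_paths()`.
fn total(counts: &[FileCount]) -> FileCount {
    counts.iter().fold(FileCount::default(), |mut acc, c| {
        acc.files += c.files;
        acc.bytes += c.bytes;
        acc
    })
}

fn main() {
    let t = total(&[
        FileCount { files: 3, bytes: 4096 }, // e.g. connlib logs
        FileCount { files: 2, bytes: 1024 }, // e.g. app logs
    ]);
    assert_eq!(t, FileCount { files: 5, bytes: 5120 });
}
```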
```diff
@@ -180,7 +219,27 @@ pub(crate) async fn count_logs_inner() -> Result<FileCount> {
     Ok(file_count)
 }
 
-/// Wrapper around `known_dirs::logs`
-pub(crate) fn log_path() -> Result<PathBuf, Error> {
-    known_dirs::logs().ok_or(Error::CantFindLocalAppDataFolder)
+#[cfg(target_os = "linux")]
+fn log_paths() -> Result<Vec<LogPath>> {
+    Ok(vec![
+        LogPath {
+            // TODO: This is magic, it must match the systemd file
+            src: PathBuf::from("/var/log").join(connlib_shared::BUNDLE_ID),
+            dst: PathBuf::from("connlib"),
+        },
+        app_log_path()?,
+    ])
+}
+
+/// Windows doesn't have separate connlib logs until #3712 merges
+#[cfg(not(target_os = "linux"))]
+fn log_paths() -> Result<Vec<LogPath>> {
+    Ok(vec![app_log_path()?])
+}
+
+fn app_log_path() -> Result<LogPath> {
+    Ok(LogPath {
+        src: known_dirs::logs().context("Can't compute app log dir")?,
+        dst: PathBuf::from("app"),
+    })
+}
```
```diff
@@ -154,7 +154,7 @@ async function clearLogs() {
       console.error(e);
     })
     .finally(() => {
-      logCountOutput.innerText = "0 files, 0 MB";
+      countLogs();
       unlockLogsForm();
     });
 }
```

Review comment: If the permissions are wrong, the GUI may fail to clear all the logs, but it won't show an error because there's no error handling in this part of the code. In that case, I want the display to be accurate.
Review comment: This would allow users in `firezone` to write bogus log files. I think the threat from that is small, and the only alternative I had in mind is issuing a "ClearLogs" command to the IPC service and waiting for a response, which would add a lot of complexity to both the IPC protocol and the log clearing code. So we could do that later if we really need to. But since the GUI decides what goes into the zip file in the end, it may not even matter if it can modify the original logs.

Reply: But we don't only use those for the zip, right? We also use them for streaming upstream.

Reply: Yeah, if we re-enable log uploading. Depending on how tight we want the security sandbox, I was thinking the tunnel process shouldn't be allowed to read the user's files, so it would be safer to let the user read the tunnel's files.