
New landing target messages TARGET_ABSOLUTE/RELATIVE #1943

Merged

Conversation

JonasPerolini
Contributor

@JonasPerolini JonasPerolini commented Jan 19, 2023

This pull request extends the LANDING_TARGET message to include position and orientation uncertainties, so that landing target estimates can be properly fused in a Kalman filter (PX4/PX4-Autopilot#20653). It also adds the sensor's orientation at the moment of the observation, which is necessary if there is processing time (e.g. detecting a fiducial marker) between the observation and the message being sent. Finally, it extends the MAV_FRAME enum to allow for generic sensor frames. This makes it possible to handle the LANDING_TARGET message properly:

  • if MAV_FRAME == MAV_FRAME_LOCAL_NED, the target is directly sent to the precision landing algorithm.
  • if MAV_FRAME == MAV_FRAME_SENSOR, the target is sent to the Kalman filter which will rotate it into vehicle-carried NED using the sensor's orientation q_sensor_att. The estimation is then sent to the precision landing algorithm.
  • Otherwise: the target is sent to the Kalman filter which will rotate it into vehicle-carried NED using:
    • q_sensor_att if filled
    • the current attitude of the drone (properly transformed into MAV_FRAME).

Changes:

  • Added position and orientation uncertainties to the LANDING_TARGET message
  • Added sensor's orientation at the moment of the observation to the LANDING_TARGET message
  • Extended the MAV_FRAME enum to allow for generic sensor frames

Please let me know if there are any further modifications needed.

Update:

We currently have two main tasks using targets:

  1. following a target with the message "FOLLOW_TARGET"
  2. landing on a target with the message "LANDING_TARGET".

However, the main difference between these messages is not the task, but rather the physical location of the sensor used. FOLLOW_TARGET reports observations from a sensor on the target (e.g. GPS or IMUs), while LANDING_TARGET reports observations from an onboard sensor (e.g. vision-based fiducial marker detection or irlock beacon detection). One might want to follow a target using a fiducial marker (LANDING_TARGET msg). Conversely, one might want to land on a target with a GPS attached (FOLLOW_TARGET msg). Each message should thus be usable independently of the task. Additionally, the new target estimator allows fusing several measurements, such as GPS, fiducial markers, and irlock beacons, in a Kalman Filter to estimate the target's state (for static and moving targets, PX4/PX4-Autopilot#20653). The same filter class can then be used for both the precision landing and target following tasks, reinforcing the idea that both messages should be usable in parallel.

Proposed changes:

  1. "LANDING_TARGET" becomes "RELATIVE_TARGET" (for onboard sensors such as fiducial markers, irlock beacons, etc.)
  2. "FOLLOW_TARGET" becomes "ABSOLUTE_TARGET" (for sensors on the target such as GPS, IMUs, etc.).
  3. Both "RELATIVE_TARGET" and "ABSOLUTE_TARGET" have a "TARGET_TASK" field, which can either be "TARGET_TASK_FOLLOW" or "TARGET_TASK_LAND," depending on the task at hand.

Additionally, adding a MAV_FRAME field to the ABSOLUTE_TARGET msg allows measurements to be provided in the target's frame, using the quaternion field to rotate them into the navigation NED frame, which makes the message less confusing.
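To illustrate, rotating a measurement from the target's frame into NED with such a quaternion field could be sketched like this (hedged, illustrative Python, not part of the proposed message definitions; quaternions as (w, x, y, z) tuples, Hamilton convention):

```python
def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # t = 2 * (q_vec x v)
    tx = 2.0 * (y * v[2] - z * v[1])
    ty = 2.0 * (z * v[0] - x * v[2])
    tz = 2.0 * (x * v[1] - y * v[0])
    # v_rotated = v + w * t + q_vec x t
    return (v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx))
```

For example, with a quaternion encoding a 90-degree yaw, a measurement of 1 m along the target's x-axis maps to 1 m along the NED y-axis.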

@auturgy
Collaborator

auturgy commented Jan 19, 2023

This breaks every existing implementation. In my opinion this needs to be new messages, as a PR against development.xml.

@JonasPerolini
Contributor Author

Hi, thank you for your response @auturgy.

I do see your point: users will have to change the message name, and we will have to slightly change MAVROS. However, having new messages just delays the problem in my opinion, because:

  1. It will make the whole target situation confusing (i.e. should I use the new messages or the old ones?).

  2. The proposed changes keep the existing fields; therefore, the user only needs to change a name, which seems reasonable when considering the benefits (i.e. the new Kalman Filter for target tracking; please have a look at the Landing Target Estimator refactor, PX4/PX4-Autopilot#20653).

  3. Eventually, we will keep either the new messages or the old ones, and "every existing implementation will break".

  4. New messages mean new handling in the mavlink_receiver and the SimulatorMavlink.

  5. The current handling of the landing target is extremely limited:

    • Only Irlock reports are being forwarded to the Kalman filter. However, I would assume that most users use the irlock plugin which is directly sent to the Kalman filter and does not require the MAVLINK msg.
    • The landing_target.type field is not even used.
    • All other observations must be handled directly by the user and be somehow converted to local NED.

    Therefore, the main (if not only) use case is to send a target in local NED through MAVLINK, which is directly sent to the precision landing algorithm. With the new messages, this will not break; we can clearly state in the documentation that such observations must be sent in local NED in the RELATIVE_TARGET msg (note: ABSOLUTE_TARGET does not have a position field, only lat/lon, so I don't see how someone would confuse the two).

Now there is another option: we keep the same message names (LANDING_TARGET and FOLLOW_TARGET), we add the proposed fields which drastically improve the overall behavior (see first post) and we clearly explain in the documentation that the names will eventually change to RELATIVE_TARGET and ABSOLUTE_TARGET (or any other msg name which does not contain "land" or "follow"). With this solution, no current implementation breaks and we keep all the benefits.

Would that work for you @auturgy ?

@auturgy
Collaborator

auturgy commented Jan 20, 2023

This changes the crc: if someone updates their flight controller to a version that includes mavlink with your changes, but doesn't update a fully functional offboard implementation of follow/prec land (common scenario), the messages will be rejected by the parser. Needs to be new messages against development.xml

@JonasPerolini
Contributor Author

Summary and use:

New messages in development.xml:

  • ABSOLUTE_TARGET for sensors on the target e.g. GPS
  • RELATIVE_TARGET for onboard sensors detecting the target e.g. fiducial marker, irlock, ...
  • Each message contains:
    • TARGET_TASK which can be FOLLOW or LAND. This will allow using the same Kalman Filter for precision landing and target following in the future.
    • q_sensor_att, the attitude of the sensor at the time of the measurement. The main purpose is to avoid overshoots due to target detection processing time, e.g. if we have a complex target detection algorithm that must process the frame for 1 second, we absolutely need the attitude of the sensor when the camera frame was grabbed, not when the msg was sent. Note: it also allows sending only a diagonal covariance matrix (which becomes full once rotated into the KF frame, vehicle-carried NED: covRotated = R * covDiag * R^T)
    • Variance of the observations, for proper Kalman Filtering
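That covariance rotation can be sketched in a few lines (hedged, illustrative Python with plain nested lists; the yaw-only rotation is chosen purely for the example and is not part of the proposal):

```python
import math

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate_cov(R, cov_diag):
    """covRotated = R * diag(cov_diag) * R^T."""
    cov = [[cov_diag[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(R, cov), Rt)

# Yaw-only rotation of 45 degrees, purely illustrative
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
# Diagonal measurement covariance becomes a full matrix after rotation,
# so only sending the diagonal over the wire loses nothing
cov_ned = rotate_cov(R, [1.0, 4.0, 0.25])
```

After the rotation the off-diagonal terms are non-zero, which is why the full matrix only needs to be formed on the receiving side.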

Logic:

  • depending on the TARGET_TASK, the observation is forwarded to the landing_target estimator or to the follow_target estimator.
  • The Kalman Filter fuses data in the vehicle-carried NED frame to avoid depending on a local position estimate. Therefore, if the target is sent in MAV_FRAME == MAV_FRAME_LOCAL_NED, the measurement is either:
    • sent directly to the precision landing algorithm if the KF is disabled (same as current implementation).
    • ignored if the KF is enabled (we assume that there are other observations).

use case example: Land on a target which is equipped with a GPS and a fiducial marker.

  • GPS data will be sent via ABSOLUTE_TARGET
  • Fiducial marker data will be sent via RELATIVE_TARGET

Important note: with the new modifications, the current implementations using the messages LANDING_TARGET and FOLLOW_TARGET do not break:

  • irlock observation from LANDING_TARGET goes into the KF (as done right now)
  • fiducial marker observation from LANDING_TARGET goes straight to the precision landing algorithm, if the Kalman Filter is disabled (default state)

The goal is to eventually replace LANDING_TARGET and FOLLOW_TARGET with RELATIVE_TARGET and ABSOLUTE_TARGET.

@hamishwillee
Collaborator

@dakejahl You've done a landing target implementation - does this make sense to you? Would this help or harm your implementation? Is there anyone else you can suggest as a reviewer?

Note that you kind of need to read the top description to get the full picture, even though the final "implementation" is in #1943 (comment)

I'm not sure about the names - I didn't read "ABSOLUTE" and think "onboard".

  • GPS data will be sent via ABSOLUTE_TARGET
  • Fiducial marker data will be sent via RELATIVE_TARGET

@potaito
Contributor

potaito commented Jan 25, 2023

@hamishwillee
Onboard and Offboard are two often-confused terms, which is why we were looking for new names. Both terms can mean almost anything. In particular, in this case an algorithm can run offboard or onboard, yet produce an absolute measurement in both cases.

  • Offboard and Onboard often refer to where code is executed
  • Absolute and Relative instead try to convey the kind of reference frame used, not where the code is running

Suggestions more than welcome of course 😄

@hamishwillee
Collaborator

@potaito I get that; the problem is that "absolute" might be a fixed frame, but the actual target can be moving. In other words, correct but not intuitive to me.

Thanks for updating the ids. I'm waiting on @dakejahl for a sanity check - I don't have the background to assess this.

@JonasPerolini
Contributor Author

@hamishwillee would you prefer TARGET_OBS_GROUND and TARGET_OBS_DRONE?

One other option would be to merge the two messages into one big message called TARGET_OBS. I'm not familiar with the maximum message size requirements. This would extend the current RELATIVE_TARGET msg with a velocity field (e.g. using optical flow to get the target's relative velocity).

@hamishwillee
Collaborator

@JonasPerolini What I'd prefer is feedback from others who are doing implementations and are at the right expertise level; I'm not competent in the domain to know what is useful information and what is intuitive to someone working in the area. So I won't offer an opinion at this time.

As the messages are quite different, I'd prefer them separate as you have done; there is no point sending lots of unused information. FYI though, the packet length is about 255 bytes, so this would be doable if it were desired: https://mavlink.io/en/guide/serialization.html

@dakejahl
Contributor

  • type How is this data useful? Is this data actionable?
  • task I don't think it makes sense for the target to know what its task is. If you want to use a target for a specific task this should be defined within the autopilot. For example use a parameter to associate the tag ID with the specific task.
  • id isn't very useful on its own. If you were using aruco tags you'd want to know both the dictionary and the tag ID. One simple option would be to use an enum; this would have the benefit of defining "ID"s for things other than aruco tags, which may have a different convention (for example IR beacons), although it would require an update every time someone wants to add a new definition. I'm open to suggestions, but IMO just a uint8_t id is not enough information. Another option could be id and dict with different definitions depending on the type (so we'd need the type field in this case), but this increases the message size.
ARUCO_DICT_4X4_50_ID_1
ARUCO_DICT_4X4_50_ID_2
ARUCO_DICT_4X4_100_ID_1
ARUCO_DICT_4X4_250_ID_1
ARUCO_DICT_7X7_50_ID_1
IR_BEACON_ID_1
IR_BEACON_ID_2
etc ...
  • custom_state is this really necessary? This seems too device-specific
  • est_capabilities is this used to determine which fields of the message are populated? e.g. lat/lon if bit 0 is set?
  • rates RPY? How is this useful?
  • frame are frames other than MAV_FRAME_LOCAL_NED actually useful?

Maybe rename to TARGET_ABSOLUTE and TARGET_RELATIVE
Maybe rename yaw_var to q_var
Maybe rename q_att to q_target
Maybe rename q_sensor_att to q_sensor

You will want to have both q_att and q_sensor_att regardless of the frame. As you stated above, you need to know the sensor pose and target pose at the time of measurement.

Keep in mind you probably don't want your drone to follow or land on targets that are not yours or that are not part of the currently active operation. A drone should be able to dynamically change which target is associated with which task. Hence the need for a unique identifier as well as the task being dictated by the autopilot rather than the target.

@JonasPerolini
Contributor Author

JonasPerolini commented Jan 27, 2023

Thanks for the detailed review @dakejahl !

Renaming:

  • ok to rename q_att to q_target
  • ok to rename q_sensor_att to q_sensor
  • ok to rename ABSOLUTE_TARGET and RELATIVE_TARGET to TARGET_ABSOLUTE and TARGET_RELATIVE respectively
  • I prefer yaw_var as it is the uncertainty of the yaw, not the uncertainty of the quaternion

Field changes:

  • custom_state, est_capabilities, rates to be discussed with @potaito
  • type, frame, id, task should not change in my opinion, see detail below. Please let me know what you think.

type How is this data useful? Is this data actionable?

Depending on the target type, you have to forward it to different messages e.g. irlock_report, fiducial_marker_report, ... which do not necessarily require the same processing. (Even if the processing is the same, you still need to know where the observation comes from to properly fill the innovation_msg which is crucial for debugging).

id isn't very useful on its own. If you were using aruco tags you'd want to know both the dictionary and the tag ID.

I agree, we cannot have the id be the ID of the marker for the specific detection algorithm (e.g. aruco marker 5 in dict 4x4), as every detection algorithm (e.g. Aruco, AprilTag) will be different. Detecting one specific tag but not another must be handled either off-board or in a driver; it has no place in a message in my opinion. The purpose of the id field is to allow the user to define an identifier for a given target. In the future, we could run multiple KF instances in parallel, i.e. track 2 different targets, and use the id to differentiate which observation goes into which KF.

task I don't think it makes sense for the target to know what its task is.

For me the task is the simplest way to associate a target to an estimator (e.g. KF FOLLOW, KF LAND). It also allows for complex and unique off-board behaviors, e.g. dynamically changing the task. Note that the task is only there to forward the observation to the target estimator; it does not dictate the behavior of the drone. One can track a landing target and a follow target at the same time.

  • If you want to use a target for a specific task this should be defined within the autopilot. For example use a parameter to associate the tag ID with the specific task.

  • What if you want to follow a target and then land on it? You would have to associate this specific tag ID to both tasks somehow. Also the target estimator (or MAVLINK receiver) would have to check the list of given tag ids and compare them. It is much simpler to change the task off-board in my opinion.

  • Keep in mind you probably don't want your drone to follow or land on targets that are not yours or that are not part of the currently active operation.

  • Totally agree, but this should be done off-board or in a driver. It makes no sense to send a target that is not part of the active operation via MAVLINK, and rely on the target tracker to ignore it. In my opinion, the drone's job given a target is to track it (of course the drone must check that the values in the fields make sense, but not the target id).

  • A drone should be able to dynamically change which target is associated with which task. Hence the need for a unique identifier as well as the task being dictated by the autopilot rather than the target.

  • Yes, it should be able to dynamically change which target is associated with which task, for example following a target and then landing on it. Why can't this work using the task field?

frame are frames other than MAV_FRAME_LOCAL_NED actually useful?

Let's say you have an extremely fast detection algorithm in the body frame which requires no processing time. Then you only need to fill the MAV_FRAME, and since you don't specify q_sensor_att, the current attitude will be used. Otherwise, the user must somehow get the attitude of the sensor off-board, which is overkill given the negligible processing time.

custom_state is this really necessary? This seems too device-specific
est_capabilities is this used to determine which fields of the message are populated? e.g. lat/lon if bit 0 is set?
rates RPY? How is this useful?

I've added those to keep compatibility with the FOLLOW_TARGET msg. From my limited understanding, rates is used when following cars to include the steering wheel input. @potaito can provide a better explanation.
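As a side note, the bitmask reading of est_capabilities used in the handler example later in this thread (bit 0 = position, bit 1 = velocity) can be sketched as follows; the function and dictionary keys are illustrative, not from the message definition:

```python
# Bit assignments follow the handler example in this thread, not a finalized spec
POS_BIT = 1 << 0  # position (lat/lon/alt) fields populated
VEL_BIT = 1 << 1  # velocity fields populated

def decode_capabilities(est_capabilities):
    """Return which optional field groups of the message are populated."""
    return {
        "position": bool(est_capabilities & POS_BIT),
        "velocity": bool(est_capabilities & VEL_BIT),
    }
```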

@hamishwillee
Collaborator

W.r.t. names we currently have:

  • GPS data will be sent via TARGET_ABSOLUTE
  • Fiducial marker data will be sent via TARGET_RELATIVE

So TARGET_ABSOLUTE is for targets that know their position in an absolute frame - such as GPS? TARGET_RELATIVE is where the target is in some frame relative to the vehicle, and where the location is generated from data/sensors on the vehicle? That makes sense.

@dakejahl Can you respond to the response?

Would/will you use them? If you were designing the messages, would they look like this? Are there any missing use cases? Are they an awesome improvement?

I have nothing useful to add other than w.r.t. "I've added those to keep compatibility with the FOLLOW_TARGET": IMO we need to find out if anyone uses them or has a use case that requires them, then look and see whether that use case is met without them. If not, I'd suggest we prototype without them.

@auturgy
Collaborator

auturgy commented Feb 1, 2023

@rishabsingh3003 can you have a look at this please

@JonasPerolini
Contributor Author

Following the MAVLink coordination call, the MAV_FRAME enum will not be changed, as MAV_FRAME_SENSOR is too generic. To solve the issue:

  • In TARGET_RELATIVE: I've created a new enum, TARGET_OBS_FRAME, which contains the frame of the target's observation from an onboard sensor.
  • In TARGET_ABSOLUTE: MAV_FRAME is not needed, as the position is in lat/lon (WGS84) and the velocity/acceleration are in the target's body frame. The field q_target allows converting from the target's body frame to NED.

Note: Now all the changes are in development.xml.

@hamishwillee
Collaborator

Thanks very much @JonasPerolini - and for taking the time to come to the MAV call and discuss. My summary of the discussion in 20230201-Dev-Meeting

  • Need to change the addition to MAV_FRAME - it is inconsistent and that enum is a pain point. - which I can see you have already done.
  • We need more review from domain experts - requested.
  • We don't want you to wait on the MAVLink process - it will delay your project.
    • Please create your own fork and start development
    • Continue discussion on PR to get best possible messages.
    • Thanks for contributing!

@JonasPerolini
Contributor Author

Here's how I would use the MAVLink messages for the Kalman filter. (The id field is ignored for now, but it can be used once we have several estimators.)

void
MavlinkReceiver::handle_message_target_absolute(mavlink_message_t *msg)
{
	mavlink_target_absolute_t target_absolute;
	mavlink_msg_target_absolute_decode(msg, &target_absolute);

	landing_target_gnss_s target_GNSS_report{};
	target_GNSS_report.timestamp = target_absolute.timestamp;

	bool updated = false;

	// Position: bit 0
	if (!(target_absolute.est_capabilities & (1 << 0))) {
		target_GNSS_report.abs_pos_updated = false;

	} else {
		target_GNSS_report.abs_pos_updated = true;
		target_GNSS_report.lat = target_absolute.lat;
		target_GNSS_report.lon = target_absolute.lon;
		target_GNSS_report.alt = target_absolute.alt * 1000.f; //Convert to [mm]
		target_GNSS_report.eph = target_absolute.position_std[0];
		target_GNSS_report.epv = target_absolute.position_std[1];
		updated = true;
	}

	// Velocity: bit 1
	if (!(target_absolute.est_capabilities & (1 << 1))) {
		target_GNSS_report.vel_ned_updated = false;

	} else {
		target_GNSS_report.vel_ned_updated = true;
		target_GNSS_report.vel_n_m_s = target_absolute.vel[0];
		target_GNSS_report.vel_e_m_s = target_absolute.vel[1];
		target_GNSS_report.vel_d_m_s = target_absolute.vel[2];
		target_GNSS_report.s_variance_m_s = target_absolute.vel_std[0];
		updated = true;
	}

	if (updated) { _landing_target_gnss_pub.publish(target_GNSS_report); }
}
void
MavlinkReceiver::handle_message_target_relative(mavlink_message_t *msg)
{
	mavlink_target_relative_t target_relative;
	mavlink_msg_target_relative_decode(msg, &target_relative);

	vehicle_attitude_s	vehicle_attitude;
	vehicle_local_position_s vehicle_local_position;

	bool publish_target = false;
	matrix::Quatf q_sensor(target_relative.q_sensor);

	// Unsupported sensor frame
	if (target_relative.frame != TARGET_OBS_FRAME_LOCAL_OFFSET_NED && target_relative.frame != TARGET_OBS_FRAME_LOCAL_NED &&
	    target_relative.frame != TARGET_OBS_FRAME_BODY_FRD && target_relative.frame != TARGET_OBS_FRAME_OTHER) {
		mavlink_log_critical(&_mavlink_log_pub, "landing target: coordinate frame %" PRIu8 " unsupported.\t",
				     target_relative.frame);
		events::send<uint8_t>(events::ID("mavlink_rcv_target_rel_unsup_coord_frame"), events::Log::Error,
				      "Target relative: unsupported coordinate frame {1}", target_relative.frame);

	} else {
		/* Check if the sensor's attitude field is filled. */
		if ((fabsf(q_sensor(0)) + fabsf(q_sensor(1)) + fabsf(q_sensor(2)) + fabsf(q_sensor(3))) > 1e-6f) {
			publish_target = true;

		} else {
			/* Fill q_sensor depending on the frame*/
			if (target_relative.frame == TARGET_OBS_FRAME_LOCAL_OFFSET_NED || target_relative.frame == TARGET_OBS_FRAME_LOCAL_NED) {
				// Observation already in NED, set quaternion to identity (1,0,0,0)
				q_sensor.setIdentity();
				publish_target = true;

			} else if (target_relative.frame == TARGET_OBS_FRAME_BODY_FRD && _vehicle_attitude_sub.update(&vehicle_attitude)) {
				// Set the quaternion to the current attitude
				const matrix::Quaternionf quat_att(&vehicle_attitude.q[0]);
				q_sensor = quat_att;
				publish_target = true;

			} else if (target_relative.frame == TARGET_OBS_FRAME_OTHER) {
				// Target sensor frame: OTHER and no quaternion is provided
				mavlink_log_critical(&_mavlink_log_pub,
						     "landing target: coordinate frame %" PRIu8 " unsupported when the q_sensor field is not filled.\t",
						     target_relative.frame);
				events::send<uint8_t>(events::ID("mavlink_rcv_target_rel_q_sensor_not_filled"), events::Log::Error,
						      "Target relative: unsupported coordinate frame {1} when the q_sensor field is not filled", target_relative.frame);
			}
		}
	}

	if (publish_target) {
		int32_t ltest_enabled;
		param_get(param_find("LTEST_EN"), &ltest_enabled);

		if (!ltest_enabled) {

			// If the landing target estimator is disabled, send the target directly to the precision landing algorithm in local NED frame
			landing_target_pose_s landing_target_pose{};

			// Measurement already in local NED
			if (target_relative.frame == TARGET_OBS_FRAME_LOCAL_NED) {

				landing_target_pose.timestamp = _mavlink_timesync.sync_stamp(target_relative.timestamp);
				landing_target_pose.x_abs = target_relative.x;
				landing_target_pose.y_abs = target_relative.y;
				landing_target_pose.z_abs = target_relative.z;
				landing_target_pose.abs_pos_valid = true;

				_landing_target_pose_pub.publish(landing_target_pose);

			} else if (_vehicle_local_position_sub.update(&vehicle_local_position) && vehicle_local_position.xy_valid) {

				// If the local position is available, convert from sensor frame to local NED
				matrix::Vector3f target_relative_frame(target_relative.x, target_relative.y, target_relative.z);

				if (target_relative.frame == TARGET_OBS_FRAME_BODY_FRD || target_relative.frame == TARGET_OBS_FRAME_OTHER) {
					// Rotate measurement into vehicle-carried NED
					target_relative_frame = q_sensor.rotateVector(target_relative_frame);
				}

				// Convert from vehicle-carried NED to local NED
				landing_target_pose.x_abs = target_relative_frame(0) + vehicle_local_position.x;
				landing_target_pose.y_abs = target_relative_frame(1) + vehicle_local_position.y;
				landing_target_pose.z_abs = target_relative_frame(2) + vehicle_local_position.z;
				landing_target_pose.abs_pos_valid = true;

				_landing_target_pose_pub.publish(landing_target_pose);
			}

		} else {

			// Send target to the landing target estimator
			if (target_relative.type == LANDING_TARGET_TYPE_LIGHT_BEACON) {

				irlock_report_s irlock_report{};

				irlock_report.timestamp = hrt_absolute_time();
				irlock_report.pos_x = target_relative.x;
				irlock_report.pos_y = target_relative.y;

				q_sensor.copyTo(irlock_report.q);
				_irlock_report_pub.publish(irlock_report);

			} else if (target_relative.type == LANDING_TARGET_TYPE_VISION_FIDUCIAL) {

				// Position report
				fiducial_marker_pos_report_s fiducial_marker_pos_report{};

				fiducial_marker_pos_report.timestamp = _mavlink_timesync.sync_stamp(target_relative.timestamp);
				fiducial_marker_pos_report.x_rel_body = target_relative.x;
				fiducial_marker_pos_report.y_rel_body = target_relative.y;
				fiducial_marker_pos_report.z_rel_body = target_relative.z;

				fiducial_marker_pos_report.cov_x_rel_body = target_relative.pos_var[0];
				fiducial_marker_pos_report.cov_y_rel_body = target_relative.pos_var[1];
				fiducial_marker_pos_report.cov_z_rel_body = target_relative.pos_var[2];

				q_sensor.copyTo(fiducial_marker_pos_report.q);
				_fiducial_marker_pos_report_pub.publish(fiducial_marker_pos_report);

				// Yaw report
				fiducial_marker_yaw_report_s fiducial_marker_yaw_report{};
				fiducial_marker_yaw_report.timestamp = _mavlink_timesync.sync_stamp(target_relative.timestamp);

				// Transform the target's orientation from the TARGET_OBS_FRAME into a yaw relative to NED
				const matrix::Quatf q_target(target_relative.q_target);
				const matrix::Quatf q_in_ned = q_sensor * q_target;
				const float target_yaw_ned = matrix::wrap_pi(matrix::Eulerf(q_in_ned).psi());

				fiducial_marker_yaw_report.yaw_ned = target_yaw_ned;
				fiducial_marker_yaw_report.yaw_var_ned = target_relative.yaw_var;
				_fiducial_marker_yaw_report_pub.publish(fiducial_marker_yaw_report);

			}
		}
	}
}
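The yaw extraction at the end of the snippet (compose q_sensor with q_target, then take the yaw angle) can be sketched stand-alone (hedged, illustrative Python; quaternions as (w, x, y, z) tuples, Hamilton product, ZYX Euler convention):

```python
import math

def quat_mul(q1, q2):
    """Hamilton product q1 * q2, quaternions as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def yaw_from_quat(q):
    """Yaw (psi) of a unit quaternion; atan2 keeps it in (-pi, pi]."""
    w, x, y, z = q
    return math.atan2(2.0 * (w*z + x*y), 1.0 - 2.0 * (y*y + z*z))

# e.g. sensor yawed 90 degrees, target aligned with the sensor frame:
q_sensor = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_target = (1.0, 0.0, 0.0, 0.0)
yaw_ned = yaw_from_quat(quat_mul(q_sensor, q_target))
```

This mirrors the `q_sensor * q_target` composition and `Eulerf(...).psi()` call above, without the PX4 matrix library.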

@JonasPerolini
Contributor Author

Status: every conversation is resolved, and there is an example use case.

For me (and @potaito I believe) it's ready to be merged into dev once @hamishwillee confirms the following changes: the id name and the sensor capabilities flags.

@potaito
Contributor

potaito commented Feb 24, 2023

Status: every conversation is resolved, and there is an example use case.

For me (and @potaito I believe) it's ready to be merged into dev once @hamishwillee confirms the following changes: the id name and the sensor capabilities flags.

yep 😊

@hamishwillee
Collaborator

We agreed in the mavlink call last night that we are happy for this to be merged for prototyping. I'm happy to do this, but we'll need to get the schema updated to allow some of the new units - see https://github.com/mavlink/mavlink/pull/1943/files#r1137815803

Once that is done, please ping me/cross link and I'll see if I can expedite that.

@kripper

kripper commented Mar 20, 2023

Hi,

I'm the current maintainer of RosettaDrone: a MAVLink 2 implementation for DJI drones and a "flight controller layer" built on top of DJI's SDK.

I just released Vision Landing 2 and was implementing the code for landing on the target when I felt that the MAVLink message definition was a bit too limited, given the many different options and ideas that could/should be applied to accomplish a fast and smooth landing on the target:
RosettaDrone/rosettadrone#132

We would like to contribute to this topic and implement something standard and well defined from the beginning (good practices).
I believe the only difference in our use case is that we additionally have a closed-source positioning system provided by DJI that is out of our control and that we may or may not use for landing (I have a hunch that visual computing may be more accurate or reliable).
Besides that, our use case should be the same, so we would like to work on and help maintain a shared code base/model. I'm 100% dedicated to this topic, so please give me your feedback or let me know how I can contribute.

I'm now reading more in detail all your previous comments and progress.

@potaito
Contributor

potaito commented Mar 20, 2023

Hey @kripper
The best thing you can do is read the proposed message definitions in the diff and check

  • whether you understand them / they make sense to you
  • whether they would support your use case without limitations

@hamishwillee
Collaborator

@kripper Your contribution would be very welcome - we'd like to get this right for as many use cases as possible. As @potaito says, the best way to start this is to review the messages, check that you understand them, and that they meet your use case.

If they do not meet your requirements, you can then either add comments to this PR, or if it is closed, add a follow on PR or issues that expand on the problems you see.

Note that they are likely to be broader than your use case because we're looking at all sorts of targets, not just vision.

I had a very brief look at RosettaDrone/rosettadrone#132 . I don't have the expertise to add useful comments sorry.

@kripper

kripper commented Mar 21, 2023

Target IDs: we use multiple markers for 1 landing point (target). It makes sense that the target id is defined by the user.

A landing yaw is important (auto charging).

Do we have a draft defining good practices for the precision landing?

@hamishwillee
Collaborator

Do we have a draft defining good practices for the precision landing?

Not within MAVLink. The only documentation I know of that might touch on this is the flight stack user guides (which are very much "how do you get it working").

As part of accepting these messages into common we will need good docs explaining the use of the interfaces.

@potaito
Contributor

potaito commented Mar 23, 2023

@kripper

A landing yaw is important (auto charging).

That would be part of a precision landing command message. I.e. the messages in this PR communicate the position of the marker/tag, and another message that tells the drone to perform precision landing would ideally convey this yaw angle.

Do we have a draft defining good practices for the precision landing?

Not yet. This PR is half of the equation and aims to get the messages for the measurements / observations right this time. In the next step, we could look at whether a new command message needs to be added.

@JonasPerolini
Contributor Author

Hi @hamishwillee, I've updated the messages to use the standard deviation ([m],[rad]) instead of the variance. I see that you've already pushed the other changes, thanks!

Anything else needed?

@hamishwillee hamishwillee merged commit 9d03fcd into mavlink:master Mar 29, 2023
@hamishwillee
Collaborator

@JonasPerolini Thanks, no this is good - merged.

@potaito @JonasPerolini Keep me informed please. Very pleased if we can verify these do what we need with hard data and get documentation / tests in place that allow us to make sure that it is possible to create compatible implementations.

@potaito
Contributor

potaito commented Mar 29, 2023

Thanks, that's great!
