Conversation
Group instances by the sleds they are running on. Options to display specific sleds by cubby or serial number, or a range of cubbies.
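The "range of cubbies" selector could be parsed along these lines. This is only a sketch; `parse_cubby_selector` and its error handling are hypothetical, not the actual omdb flag parsing:

```rust
use std::collections::BTreeSet;

/// Parse a sled selector such as "15,16" or "12-18,20" into a sorted,
/// de-duplicated list of cubby numbers. Hypothetical helper; the real
/// omdb argument parsing may differ.
fn parse_cubby_selector(spec: &str) -> Result<Vec<u32>, String> {
    let mut cubbies = BTreeSet::new();
    for part in spec.split(',') {
        let part = part.trim();
        if let Some((lo, hi)) = part.split_once('-') {
            let lo: u32 = lo.trim().parse().map_err(|e| format!("{part}: {e}"))?;
            let hi: u32 = hi.trim().parse().map_err(|e| format!("{part}: {e}"))?;
            if lo > hi {
                return Err(format!("empty range: {part}"));
            }
            cubbies.extend(lo..=hi);
        } else {
            cubbies.insert(part.parse().map_err(|e| format!("{part}: {e}"))?);
        }
    }
    Ok(cubbies.into_iter().collect())
}

fn main() {
    assert_eq!(parse_cubby_selector("15,16").unwrap(), vec![15, 16]);
    assert_eq!(parse_cubby_selector("12-14,16").unwrap(), vec![12, 13, 14, 16]);
    assert!(parse_cubby_selector("5-3").is_err());
}
```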
```rust
struct SledInstanceRow {
    instance_id: String,
    state: String,
    name: String,
```
I kind of feel like there is more data I would want to have in here; at a minimum, I would hope that we also included the instance's Propolis UUID. Perhaps we could refactor this to share more code with the `CustomerInstanceRow` in `omdb db instances`? We could change this:
`omicron/dev-tools/omdb/src/bin/omdb/db.rs`, lines 5501 to 5506 at `594db71`
so that the common fields are in a struct that we would also use here, and then have `omdb db instance list` embed that struct in one that also adds the host serial/ID?
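The suggested refactor could look roughly like the following struct-composition sketch. The names `CommonInstanceRow` and the exact field set are assumptions for illustration, not the actual omdb types (the real rows also carry `tabled` derive attributes for table rendering):

```rust
/// Hypothetical shared row type holding the fields common to
/// `omdb db instances` and `omdb db sled-instances`.
#[derive(Debug, Clone, PartialEq)]
struct CommonInstanceRow {
    instance_id: String,
    propolis_id: String,
    state: String,
    name: String,
}

/// The per-sled listing embeds the common fields and adds sled identity.
#[derive(Debug)]
struct SledInstanceRow {
    common: CommonInstanceRow,
    host_serial: String,
    sled_id: String,
}

fn main() {
    let row = SledInstanceRow {
        common: CommonInstanceRow {
            instance_id: "inst-uuid".into(),
            propolis_id: "propolis-uuid".into(),
            state: "running".into(),
            name: "four-inst".into(),
        },
        host_serial: "BRM42220046".into(),
        sled_id: "sled-uuid".into(),
    };
    // The common fields stay accessible through the embedded struct.
    assert_eq!(row.common.state, "running");
    assert_eq!(row.host_serial, "BRM42220046");
}
```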
Here is how the Propolis IDs could look after making the change above:

```
root@oxz_switch0:~# /tmp/omdb db sled-instances --sled 15,16 2> /dev/null
Sled 15 (serial: BRM42220046) sled_id: 20d857aa-043c-49bd-946e-b3a66f849f5b
INSTANCE_ID                          PROPOLIS_ID                          STATE   NAME
0ac44fac-7d3d-4e98-996d-eaa7f130cdf7 1c7c3973-66eb-4ff0-b9d2-a22d2efc4569 running four-inst
f486a222-871e-439e-9e5f-9593ec617aa1 2c66acdb-97a0-47dc-983d-74546a6723c2 running many-14-inst
b5fcae99-f87c-420b-b2fa-2055c148f804 ad270ace-e1ee-431b-8180-7bf1b8be2769 running many-10-inst
48d8364f-26bd-4ca5-a99b-70390998832f e2ce08c6-40d2-46d1-89e4-9e298d3dbb93 running many-18-inst

Sled 16 (serial: BRM42220007) sled_id: a0653160-81f7-4e78-97ef-721eae8e8c38
INSTANCE_ID                          PROPOLIS_ID                          STATE   NAME
5cb00f31-1283-4db2-a97e-7276b3da9600 0a0cde5f-7a94-4205-b75d-0df48f0f6226 running many-19-inst
750aeb06-8b6e-4f93-944f-73cff1f3e26a 0f4deb3d-d7c3-4bc0-8d77-76a1ebad9914 running focal
c66a10d6-542d-4678-861e-563a0f59c19a 27ef10d5-8d3e-4c26-af31-70a2f928ebad running many-15-inst
573efd67-c58b-4fca-b64e-d1cb7560b12a 2bb44274-2bcf-49ac-bd90-d2ff22ed1681 running many-5-inst
428396d5-36e5-42c7-9f21-a891410be917 e6cd1fdb-afb7-41c3-8ffd-e23bc39adf44 running many-12-inst
```
```rust
// Step 2: Fetch all non-deleted instances joined with their
// active VMMs.
let limit = fetch_opts.fetch_limit;
let instances: Vec<InstanceAndActiveVmm> = dsl::instance
    .filter(dsl::time_deleted.is_null())
    .left_join(
        vmm_dsl::vmm.on(vmm_dsl::id
            .nullable()
            .eq(dsl::active_propolis_id)
            .and(vmm_dsl::time_deleted.is_null())),
    )
    .limit(i64::from(u32::from(limit)))
    .select((Instance::as_select(), Option::<Vmm>::as_select()))
    .load_async(&*datastore.pool_connection_for_tests().await?)
    .await
    .context("loading instances")?
    .into_iter()
    .map(|i: (Instance, Option<Vmm>)| i.into())
    .collect();

check_limit(&instances, limit, || "listing instances".to_string());

// Step 3: Group instances by sled_id (skip those with no
// active VMM / no sled).
let mut instances_by_sled: BTreeMap<SledUuid, Vec<&InstanceAndActiveVmm>> =
    BTreeMap::new();
let mut unmatched: Vec<&InstanceAndActiveVmm> = Vec::new();

for inst in &instances {
    let sled_id = match inst.sled_id() {
        Some(id) => id,
        None => continue, // no active VMM, skip
    };
    if sled_info.contains_key(&sled_id) {
        instances_by_sled.entry(sled_id).or_default().push(inst);
    } else {
        unmatched.push(inst);
    }
}
```
It feels a bit odd to me that we do one big query to fetch every non-deleted instance, with one big fetch limit, and then add them to a map of instances by sled, when we could just filter on the VMM record's sled UUID.

Wouldn't it be nicer to implement this as:
- fetch all matching sleds and filter them as we do above
- sort the sleds
- loop over the sorted sleds, query for the instances belonging to each sled, and print the table sled-by-sled
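The proposed shape could be sketched as below, with an in-memory lookup standing in for the per-sled database query. `SledInfo`, `print_sled_tables`, and the `fetch_instances_for_sled` closure are stand-ins for illustration, not the real datastore API:

```rust
use std::collections::BTreeMap;

/// Minimal stand-in for the real sled metadata.
struct SledInfo {
    serial: String,
}

/// Iterate sleds in sorted order (BTreeMap keys are already sorted),
/// fetching each sled's instances individually instead of one big query.
fn print_sled_tables(
    sleds: &BTreeMap<u32, SledInfo>,
    fetch_instances_for_sled: impl Fn(u32) -> Vec<String>, // stand-in for a per-sled query
) -> Vec<String> {
    let mut out = Vec::new();
    for (cubby, info) in sleds {
        out.push(format!("Sled {cubby} (serial: {})", info.serial));
        for inst in fetch_instances_for_sled(*cubby) {
            out.push(format!("  {inst}"));
        }
    }
    out
}

fn main() {
    let mut sleds = BTreeMap::new();
    sleds.insert(16, SledInfo { serial: "BRM42220007".into() });
    sleds.insert(15, SledInfo { serial: "BRM42220046".into() });
    let instances: BTreeMap<u32, Vec<String>> = BTreeMap::from([
        (15, vec!["four-inst".to_string()]),
        (16, vec!["focal".to_string()]),
    ]);
    let lines =
        print_sled_tables(&sleds, |c| instances.get(&c).cloned().unwrap_or_default());
    // Sleds come out in cubby order even though they were inserted out of order.
    assert_eq!(lines[0], "Sled 15 (serial: BRM42220046)");
    assert_eq!(lines[2], "Sled 16 (serial: BRM42220007)");
}
```

A side effect of this structure, noted in the reply below, is that an instance whose VMM points at a sled not in the list is simply never printed.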
I've updated the loop, and yeah, it's much cleaner.

This new approach means we won't print any instances that don't have a matching sled. That would be an error anyway and is unlikely to happen, except perhaps transiently if things change between building the list of sleds and gathering the list of instances.
Make a common instance struct that can be shared
```rust
vmm_dsl::vmm.on(vmm_dsl::id
    .nullable()
    .eq(dsl::active_propolis_id)
    .and(vmm_dsl::time_deleted.is_null())
```
I wonder if we might want to have the `include_deleted` option be honored here? If so, we would want to indicate deletedness in the output...
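Honoring `include_deleted` would mean both relaxing the `time_deleted` filters in the query and marking deleted rows in the output. A sketch of the output side over plain rows (the `InstanceRow` type and `render_rows` helper here are hypothetical, not the actual diesel-backed code):

```rust
/// Hypothetical flattened row; `time_deleted` is `Some(..)` for
/// soft-deleted records.
struct InstanceRow {
    name: String,
    time_deleted: Option<String>,
}

/// Keep deleted rows only when asked, and annotate them in the output.
fn render_rows(rows: &[InstanceRow], include_deleted: bool) -> Vec<String> {
    rows.iter()
        .filter(|r| include_deleted || r.time_deleted.is_none())
        .map(|r| match &r.time_deleted {
            Some(t) => format!("{} (deleted at {t})", r.name),
            None => r.name.clone(),
        })
        .collect()
}

fn main() {
    let rows = vec![
        InstanceRow { name: "focal".into(), time_deleted: None },
        InstanceRow { name: "old-inst".into(), time_deleted: Some("2024-01-01".into()) },
    ];
    // Default behavior hides deleted rows; the flag surfaces and marks them.
    assert_eq!(render_rows(&rows, false), vec!["focal"]);
    assert_eq!(render_rows(&rows, true).len(), 2);
}
```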
I'm not in love with the command name, `omdb db sled-instances`, and am open to other possible choices. Here is an example of how this looks: