72 feature add nsupdate support #73

Merged
kweonminsung merged 11 commits into develop from 72-feature-add-nsupdate-support
Mar 24, 2026
Conversation

@kweonminsung
Owner

No description provided.

Copilot AI review requested due to automatic review settings March 24, 2026 11:02
Contributor

Copilot AI left a comment

Pull request overview

This PR introduces RFC 2136-style dynamic DNS updates (nsupdate) and restructures the DNS/XFR implementation by replacing the standalone xfr server with a unified dns listener handling UDP/TCP SOA, XFR, and nsupdate routing.

Changes:

  • Replace the old xfr module/server with a new dns module that listens on UDP+TCP port 53 and routes queries by opcode/qtype.
  • Add nsupdate parsing + application logic that updates records, bumps zone serials, stores snapshots, and triggers NOTIFY.
  • Update configuration and docs to use [dns] keys and document nsupdate usage.

Reviewed changes

Copilot reviewed 23 out of 28 changed files in this pull request and generated 9 comments.

Summary per file:

  • tests/fixture/bindizr.conf.toml: Switch fixture config from [xfr] to [dns] and add nsupdate_allowed_ips.
  • src/xfr/server.rs: Deleted the old dedicated XFR TCP server implementation.
  • src/xfr/mod.rs: Deleted the old XFR module initializer/spawn logic.
  • src/socket/server/notify.rs: Repoint the NOTIFY socket handler to dns::xfr::notify.
  • src/main.rs: Replace mod xfr with mod dns.
  • src/lib.rs: Export the dns module instead of xfr.
  • src/error/mod.rs: Update the top-level error import path for XfrError.
  • src/dns/mod.rs: New unified DNS UDP/TCP listener with routing to nsupdate/SOA/XFR.
  • src/dns/acl.rs: New ACL helpers for secondary servers and the nsupdate allowlist.
  • src/dns/xfr/mod.rs: New XFR handler entrypoints (TCP/UDP) and ACL validation.
  • src/dns/xfr/wire.rs: New DNS wire encoding/parsing and TCP framing helpers.
  • src/dns/xfr/notify.rs: Update the NOTIFY secondary list config key to dns.secondary_addrs.
  • src/dns/xfr/ixfr.rs: New IXFR implementation using zone_changes + snapshots.
  • src/dns/xfr/delta.rs: New access layer for zone change/snapshot data used by IXFR.
  • src/dns/xfr/error.rs: New XfrError definition under dns::xfr.
  • src/dns/xfr/catalog.rs: Remove the catalog SOA helper (moved to the SOA module).
  • src/dns/xfr/axfr.rs: Remove the SOA handler (moved to the SOA module).
  • src/dns/soa/mod.rs: New SOA handlers for TCP+UDP, including catalog zone SOA responses.
  • src/dns/nsupdate/mod.rs: New nsupdate request detection/dispatch and response generation.
  • src/dns/nsupdate/parser.rs: New DNS UPDATE parser for the zone and update sections.
  • src/dns/nsupdate/update.rs: New logic applying dynamic updates to the DB, logging changes, bumping the serial, snapshotting, and notifying.
  • src/cli/mod.rs: Bootstrap now initializes dns instead of xfr.
  • src/api/service/zone.rs: NOTIFY now routed through dns::xfr::notify.
  • src/api/service/record.rs: NOTIFY now routed through dns::xfr::notify.
  • bindizr.conf.toml: Config moves from [xfr] to [dns] and adds nsupdate_allowed_ips.
  • README.md: Document the nsupdate feature and the new [dns] config section.
  • .github/workflows/manual-release.yml: Change release name formatting.


Comment on lines +122 to +157
loop {
    let (len, client_addr) = match socket.recv_from(&mut buf).await {
        Ok(v) => v,
        Err(e) => {
            log_error!("Failed to receive DNS UDP packet: {}", e);
            continue;
        }
    };

    let query_data = &buf[..len];

    match classify_query_route(query_data) {
        Ok(QueryRoute::Nsupdate) => {
            if let Err(e) =
                nsupdate::handle_udp_nsupdate(&socket, query_data, client_addr).await
            {
                log_error!("NSUPDATE UDP handler failed for {}: {}", client_addr, e);
            }
        }
        Ok(QueryRoute::Soa) => {
            if let Err(e) =
                soa::handle_udp_soa(&socket, client_addr, &secondary_servers, query_data).await
            {
                log_warn!("Failed to handle SOA UDP query from {}: {}", client_addr, e);
            }
        }
        Ok(QueryRoute::Xfr) => {
            if let Err(e) =
                xfr::handle_udp_query(client_addr, &secondary_servers, query_data).await
            {
                log_warn!("Failed to handle XFR UDP query from {}: {}", client_addr, e);
            }
        }
        Ok(QueryRoute::Other(_)) => {}
        Err(_) => {}
    }

Copilot AI Mar 24, 2026

The UDP server loop awaits the full request handling (including DB work for nsupdate) inline. A slow update or transfer check will block processing of all subsequent UDP packets, which can cause packet loss/timeouts under load. Consider spawning a task per packet (or using a bounded work queue) so the receive loop stays responsive.
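The decoupling this comment suggests can be sketched with a bounded work queue. This is a minimal std-only illustration (using threads rather than the project's async runtime for brevity); the `Packet` type, `run_queue` helper, and queue size are hypothetical, not taken from the PR:

```rust
use std::net::SocketAddr;
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;

// Hypothetical unit of work: one received datagram plus its source address.
struct Packet {
    data: Vec<u8>,
    client_addr: SocketAddr,
}

// Simulates the receive loop feeding a bounded queue: the loop never waits
// for a handler, and drops packets (normal UDP semantics) if workers lag.
fn run_queue(packets: u8) -> usize {
    let (tx, rx) = sync_channel::<Packet>(1024);

    let worker = thread::spawn(move || {
        let mut handled = 0usize;
        while let Ok(pkt) = rx.recv() {
            // Route and handle here (nsupdate / SOA / XFR), off the hot path.
            let _ = (&pkt.data, pkt.client_addr);
            handled += 1;
        }
        handled
    });

    let addr: SocketAddr = "127.0.0.1:5353".parse().unwrap();
    for i in 0..packets {
        match tx.try_send(Packet { data: vec![i], client_addr: addr }) {
            Ok(()) => {}
            Err(TrySendError::Full(_)) => { /* drop under load, keep receiving */ }
            Err(TrySendError::Disconnected(_)) => break,
        }
    }
    drop(tx); // close the queue so the worker drains and exits

    worker.join().unwrap()
}

fn main() {
    assert_eq!(run_queue(10), 10);
}
```

With tokio the same shape would spawn a task per packet (or feed a `tokio::sync::mpsc` bounded channel) so `recv_from` is called again immediately.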

Comment on lines +54 to +93
let qdcount = u16::from_be_bytes([data[4], data[5]]) as usize;
let ancount = u16::from_be_bytes([data[6], data[7]]) as usize;
let nscount = u16::from_be_bytes([data[8], data[9]]) as usize;
let arcount = u16::from_be_bytes([data[10], data[11]]) as usize;

if qdcount != 1 {
    return Err(ParseError::InvalidHeader);
}

let mut pos = DNS_HEADER_LEN;

let (zone_name, consumed) = decode_name(data, pos)?;
pos += consumed;

if pos + 4 > data.len() {
    return Err(ParseError::InvalidZoneSection);
}

let _ztype = u16::from_be_bytes([data[pos], data[pos + 1]]);
let zclass = u16::from_be_bytes([data[pos + 2], data[pos + 3]]);
pos += 4;

if zclass != CLASS_IN {
    return Err(ParseError::InvalidZoneSection);
}

for _ in 0..ancount {
    pos = skip_rr(data, pos)?;
}

let mut updates = Vec::with_capacity(nscount);
for _ in 0..nscount {
    let (rr, next) = parse_rr(data, pos)?;
    updates.push(rr);
    pos = next;
}

for _ in 0..arcount {
    pos = skip_rr(data, pos)?;
}

Copilot AI Mar 24, 2026

parse_update_request currently skips the prerequisite section (ancount) and proceeds to apply updates regardless. In RFC 2136, prerequisites are semantically significant; ignoring them can apply updates that should have been rejected. If prerequisites (or additional records, e.g. TSIG in arcount) are present, consider rejecting with NOTIMP/REFUSED until they are implemented, rather than silently skipping them.
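The reject-until-implemented idea can be sketched as a header check run before parsing the update section. The function name and `Rcode` variants are illustrative, not the PR's actual types:

```rust
// Minimal sketch: refuse UPDATE messages that carry sections we do not
// implement yet, instead of silently skipping them. Assumes a full
// 12-byte DNS header has already been length-checked.
#[derive(Debug, PartialEq)]
enum Rcode {
    NotImp,  // prerequisites present but not implemented (RFC 2136 §3.2)
    Refused, // additional records (e.g. TSIG) present but not implemented
}

fn check_unsupported_sections(header: &[u8]) -> Result<(), Rcode> {
    // ANCOUNT (prerequisite RRs in UPDATE) is at bytes 6..8,
    // ARCOUNT (additional RRs, e.g. TSIG) is at bytes 10..12.
    let ancount = u16::from_be_bytes([header[6], header[7]]);
    let arcount = u16::from_be_bytes([header[10], header[11]]);
    if ancount != 0 {
        return Err(Rcode::NotImp);
    }
    if arcount != 0 {
        return Err(Rcode::Refused);
    }
    Ok(())
}

fn main() {
    let mut hdr = [0u8; 12];
    assert_eq!(check_unsupported_sections(&hdr), Ok(()));
    hdr[7] = 1; // one prerequisite RR
    assert_eq!(check_unsupported_sections(&hdr), Err(Rcode::NotImp));
    hdr[7] = 0;
    hdr[11] = 1; // one additional RR (e.g. TSIG)
    assert_eq!(check_unsupported_sections(&hdr), Err(Rcode::Refused));
}
```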

Comment on lines +109 to +117
let mut response = vec![0u8; DNS_HEADER_LEN];

response[0] = query_data[0];
response[1] = query_data[1];

let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;

response[2] = 0x80 | opcode_bits | rd_bit;

Copilot AI Mar 24, 2026

build_response returns only a 12-byte header with all section counts left as 0. Many DNS clients expect the question/zone section to be echoed (and the counts to reflect the included sections); returning an empty message body can lead to clients treating the response as malformed. Consider copying the request’s zone section into the response and setting QDCOUNT/ZOCOUNT accordingly (or using a real DNS message builder for UPDATE responses).

Suggested change

Before:

let mut response = vec![0u8; DNS_HEADER_LEN];
response[0] = query_data[0];
response[1] = query_data[1];
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;

After:
// Start from the full query so that the zone/question section and all
// counts are preserved, and then patch only the header fields that differ.
let mut response = query_data.to_vec();
// Copy transaction ID.
response[0] = query_data[0];
response[1] = query_data[1];
// Preserve OPCODE and RD from the query; set QR=1 (response).
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;
// Preserve high nibble of flags from the query; set low nibble to RCODE.
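A compact version of the echo-the-zone-section approach can be sketched as below. It assumes the zone name is an uncompressed label sequence (true for the first name in a message); `zone_section_end` is a hypothetical helper, not code from the PR:

```rust
const DNS_HEADER_LEN: usize = 12;

// Find the end of the zone (question) section: name labels, then
// ZTYPE (2 bytes) + ZCLASS (2 bytes). Assumes no compression pointer,
// which holds for the first name in a DNS message.
fn zone_section_end(msg: &[u8]) -> Option<usize> {
    let mut pos = DNS_HEADER_LEN;
    loop {
        let len = *msg.get(pos)? as usize;
        if len == 0 {
            pos += 1;
            break;
        }
        pos += 1 + len;
    }
    if pos + 4 <= msg.len() { Some(pos + 4) } else { None }
}

// Build an UPDATE response that echoes the zone section, so ZOCOUNT is 1
// and clients see the question they asked reflected back.
fn build_response(query: &[u8], rcode: u8) -> Option<Vec<u8>> {
    let end = zone_section_end(query)?;
    let mut resp = query[..end].to_vec();
    // QR=1, preserve OPCODE and RD from the query.
    resp[2] = 0x80 | (query[2] & 0x78) | (query[2] & 0x01);
    // Preserve high flag nibble, set RCODE in the low nibble.
    resp[3] = (query[3] & 0xF0) | (rcode & 0x0F);
    // ZOCOUNT = 1, all other counts zero.
    resp[4..12].copy_from_slice(&[0, 1, 0, 0, 0, 0, 0, 0]);
    Some(resp)
}

fn main() {
    // Fake UPDATE query: header + zone "ab." + ZTYPE(SOA)/ZCLASS(IN).
    let mut q = vec![0u8; DNS_HEADER_LEN];
    q[2] = 0x28; // opcode 5 (UPDATE)
    q[5] = 1;    // ZOCOUNT = 1
    q.extend_from_slice(&[2, b'a', b'b', 0, 0, 6, 0, 1]);
    let r = build_response(&q, 0).unwrap();
    assert_eq!(r.len(), q.len());
    assert_eq!(r[2], 0xA8); // QR=1 | opcode 5
    assert_eq!(&r[DNS_HEADER_LEN..], &q[DNS_HEADER_LEN..]);
}
```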

Comment on lines +393 to +406
async fn bump_zone_serial(zone: &Zone) -> Result<i32, UpdateError> {
    let zone_repo = get_zone_repository();
    let new_serial = generate_serial(zone.serial);

    zone_repo
        .update(Zone {
            serial: new_serial,
            ..zone.clone()
        })
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to update zone serial: {}", e)))?;

    Ok(new_serial)
}

Copilot AI Mar 24, 2026

bump_zone_serial updates the zone row unconditionally with a serial derived from the previously read value. Concurrent updates can race and write the same serial (or overwrite a newer serial), causing non-monotonic serial history and incorrect IXFR behavior. Consider using optimistic locking (e.g., UPDATE ... WHERE id=? AND serial=? and retry on 0 rows affected) or a DB-side serial bump to guarantee monotonicity under concurrency.
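The compare-and-swap retry the comment describes can be sketched against an in-memory stand-in for the zones table. `ZoneStore` and the retry bound are illustrative; in SQL this corresponds to `UPDATE zones SET serial = ? WHERE id = ? AND serial = ?` and retrying when zero rows are affected:

```rust
use std::sync::Mutex;

// In-memory stand-in for the zone row; only the serial matters here.
struct ZoneStore {
    serial: Mutex<i32>,
}

impl ZoneStore {
    fn read_serial(&self) -> i32 {
        *self.serial.lock().unwrap()
    }

    // Mirrors an UPDATE guarded by `WHERE serial = expected`:
    // returns false when zero rows were affected (someone raced us).
    fn compare_and_set(&self, expected: i32, new: i32) -> bool {
        let mut s = self.serial.lock().unwrap();
        if *s == expected {
            *s = new;
            true
        } else {
            false
        }
    }
}

fn bump_zone_serial(store: &ZoneStore) -> Result<i32, &'static str> {
    for _ in 0..5 {
        let current = store.read_serial();
        let new = current + 1; // stand-in for generate_serial(current)
        if store.compare_and_set(current, new) {
            return Ok(new);
        }
        // Lost the race: re-read the serial and retry.
    }
    Err("serial bump contention")
}

fn main() {
    let store = ZoneStore { serial: Mutex::new(41) };
    let a = bump_zone_serial(&store).unwrap();
    let b = bump_zone_serial(&store).unwrap();
    assert!(b > a); // serials stay monotonic even under retries
    assert_eq!(store.read_serial(), b);
}
```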

Comment on lines +156 to +203
let zone_records = record_repo
    .get_by_zone_id(zone.id)
    .await
    .map_err(|e| UpdateError::Internal(format!("failed to load records: {}", e)))?;

let target_type = if update.rr_type == TYPE_ANY {
    None
} else {
    Some(rr_type_to_record_type(update.rr_type)?)
};

let (target_value, target_priority) = if is_rrset_delete || update.rdata.is_empty() {
    (None, None)
} else {
    let (_, value, priority) = rr_to_record_value(update)?;
    (Some(value), priority)
};

let mut matched: Vec<Record> = Vec::new();
for record in zone_records {
    if !record.name.eq_ignore_ascii_case(&relative_name) {
        continue;
    }

    if let Some(ref typ) = target_type
        && &record.record_type != typ
    {
        continue;
    }

    if let Some(ref value) = target_value
        && !record.value.eq_ignore_ascii_case(value)
    {
        continue;
    }

    if let Some(pri) = target_priority
        && record.priority != Some(pri)
    {
        continue;
    }

    if record.record_type == RecordType::SOA {
        continue;
    }

    matched.push(record);
}

Copilot AI Mar 24, 2026

delete_records loads all records for the zone into memory and filters them in Rust. For large zones this can be slow and memory-heavy, and it also increases DB load for frequent updates. Consider adding repository methods to query/delete by (zone_id, name, type, value, priority) so deletes can be done with targeted SQL instead of full scans.

Comment on lines +44 to +87
pub async fn apply_update(
    request: UpdateRequest,
    client_addr: SocketAddr,
) -> Result<UpdateResult, UpdateError> {
    validate_acl(client_addr.ip())?;

    let zone_name = trim_dot(&request.zone_name);
    if zone_name.is_empty() {
        return Err(UpdateError::NotZone(
            "root zone is not supported".to_string(),
        ));
    }

    let zone_repo = get_zone_repository();
    let zone = zone_repo
        .get_by_name(zone_name)
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to load zone: {}", e)))?
        .ok_or_else(|| UpdateError::NotZone(format!("zone '{}' not found", zone_name)))?;

    let mut changed = false;

    for update in &request.updates {
        let this_changed = apply_single_update(&zone, update).await?;
        changed = changed || this_changed;
    }

    if changed {
        let new_serial = bump_zone_serial(&zone).await?;
        save_zone_snapshot(&zone, new_serial).await?;

        if let Err(e) = dns::xfr::notify::send_notify(Some(&zone.name)).await {
            log_error!("NSUPDATE notify failed for zone {}: {}", zone.name, e);
        }

        log_info!(
            "NSUPDATE committed for zone {} with serial {}",
            zone.name,
            new_serial
        );
    }

    Ok(UpdateResult::Applied { changed })
}

Copilot AI Mar 24, 2026

There are no tests covering the new nsupdate parsing/apply logic (opcode detection, prerequisite handling, ACL behavior, add vs delete semantics, serial bump + snapshot + zone_changes consistency). Since this is protocol-facing and mutates DB state, adding unit/integration tests will help prevent regressions and validate RFC 2136 edge cases.

Comment on lines +380 to +391
fn validate_acl(client_ip: IpAddr) -> Result<(), UpdateError> {
    let allowed_ips = acl::nsupdate_allowed_ips_from_config();

    if acl::is_client_allowed(client_ip, &allowed_ips) {
        Ok(())
    } else {
        Err(UpdateError::Refused(format!(
            "IP {} not allowed",
            client_ip
        )))
    }
}

Copilot AI Mar 24, 2026

validate_acl allows updates when dns.nsupdate_allowed_ips is empty because is_client_allowed treats an empty ACL as allow-all. For dynamic updates this is a dangerous default (it makes nsupdate open to the world unless explicitly configured). Consider making nsupdate opt-in by refusing when the allowed list is empty, or adding a separate is_client_allowed_or_deny_if_empty path for nsupdate.
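The deny-if-empty variant the comment asks for is a one-line policy change. A minimal sketch, with a hypothetical function name (the real helper lives in `src/dns/acl.rs`):

```rust
use std::net::IpAddr;

// Sketch: unlike a read-only query ACL where an empty list may mean
// allow-all, nsupdate mutates the zone, so it should be opt-in:
// an empty allowlist refuses every client.
fn is_nsupdate_allowed(client: IpAddr, allowed: &[IpAddr]) -> bool {
    if allowed.is_empty() {
        return false; // deny-by-default for mutating operations
    }
    allowed.contains(&client)
}

fn main() {
    let client: IpAddr = "192.0.2.10".parse().unwrap();
    assert!(!is_nsupdate_allowed(client, &[])); // empty list: refuse
    let allowed: Vec<IpAddr> = vec!["192.0.2.10".parse().unwrap()];
    assert!(is_nsupdate_allowed(client, &allowed));
    assert!(!is_nsupdate_allowed("198.51.100.1".parse().unwrap(), &allowed));
}
```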

Comment on lines +44 to +87

Copilot AI Mar 24, 2026

apply_update performs multiple DB mutations (record creates/deletes, zone_changes inserts, zone serial update, snapshot upsert) without a transaction. If any step fails mid-request, the zone can be left in a partially-updated state (records changed but serial/snapshot not updated, or vice versa), which will break IXFR/secondary sync. Consider wrapping the whole update in a DB transaction and committing only after all steps succeed.
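Short of a full DB transaction, the weaker "pre-validate, then commit" shape can be sketched in memory: stage every update first, and touch shared state only once all of them validated. `StagedChange`, `stage_updates`, and the `Vec` database are illustrative stand-ins, not the PR's types:

```rust
// Sketch of the stage-then-commit pattern: validate/stage all updates
// first, mutate the "database" only at a single commit point.
#[derive(Clone, Debug, PartialEq)]
struct StagedChange {
    op: &'static str,
    name: String,
}

fn stage_updates(updates: &[&str]) -> Result<Vec<StagedChange>, String> {
    let mut staged = Vec::new();
    for u in updates {
        if u.is_empty() {
            // Any failure aborts before anything touched the database.
            return Err("invalid update".to_string());
        }
        staged.push(StagedChange { op: "ADD", name: u.to_string() });
    }
    Ok(staged)
}

fn apply_update(db: &mut Vec<StagedChange>, updates: &[&str]) -> Result<bool, String> {
    let staged = stage_updates(updates)?; // validate everything first
    let changed = !staged.is_empty();
    db.extend(staged); // single commit point (serial bump/snapshot go here too)
    Ok(changed)
}

fn main() {
    let mut db = Vec::new();
    assert_eq!(apply_update(&mut db, &["www", "mail"]), Ok(true));
    assert_eq!(db.len(), 2);
    // A batch with one bad update leaves the database untouched.
    assert!(apply_update(&mut db, &["ok", ""]).is_err());
    assert_eq!(db.len(), 2);
}
```

With a real SQL backend the same boundary would be a `BEGIN`/`COMMIT` pair covering records, zone_changes, the serial bump, and the snapshot.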

Comment on lines +443 to +475
async fn log_zone_change(
    zone_id: i32,
    operation: &str,
    name: &str,
    record_type: &RecordType,
    value: &str,
    ttl: Option<i32>,
    priority: Option<i32>,
) -> Result<(), UpdateError> {
    let zone_repo = get_zone_repository();
    let zone = zone_repo
        .get_by_id(zone_id)
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to load zone for change: {}", e)))?
        .ok_or_else(|| UpdateError::Internal(format!("zone {} not found", zone_id)))?;

    let serial = generate_serial(zone.serial);

    let change_repo = get_zone_change_repository();
    change_repo
        .create(ZoneChange {
            id: 0,
            zone_id,
            serial,
            operation: operation.to_string(),
            record_name: name.to_string(),
            record_type: record_type.to_string(),
            record_value: value.to_string(),
            record_ttl: ttl,
            record_priority: priority,
        })
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to log zone change: {}", e)))?;

Copilot AI Mar 24, 2026

log_zone_change recomputes the serial by reloading the zone and calling generate_serial(zone.serial) for each individual change. Under concurrency this can produce zone_changes serials that don’t match the serial ultimately written by bump_zone_serial, and even within a single request different changes could end up with different serials. Prefer generating the new serial once per update request and passing it into log_zone_change so all changes and the zone serial stay consistent.
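The fix the comment suggests is to compute the serial once per request and thread it through. A tiny sketch of that shape (all names illustrative; `generate_serial` here is a trivial stand-in for the real scheme):

```rust
// Sketch: derive the new serial exactly once per UPDATE request and use
// that same value for every zone_change row, the zone row, and the snapshot.
fn generate_serial(old: i32) -> i32 {
    old + 1 // stand-in; the real implementation may be date-based
}

struct Change {
    serial: i32,
    op: &'static str,
}

fn commit_request(zone_serial: i32, ops: &[&'static str]) -> (i32, Vec<Change>) {
    let new_serial = generate_serial(zone_serial); // computed exactly once
    let changes: Vec<Change> = ops
        .iter()
        .map(|&op| Change { serial: new_serial, op })
        .collect();
    (new_serial, changes)
}

fn main() {
    let (serial, changes) = commit_request(41, &["ADD", "DELETE", "ADD"]);
    assert_eq!(serial, 42);
    // Every change row shares the serial that will be written to the zone.
    assert!(changes.iter().all(|c| c.serial == serial));
}
```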

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 26 out of 32 changed files in this pull request and generated 7 comments.



Comment on lines +73 to +90
pub fn parse_update_request(data: &[u8]) -> Result<UpdateRequest, ParseError> {
    if data.len() < DNS_HEADER_LEN {
        return Err(ParseError::TooShort);
    }

    let opcode = (data[2] >> 3) & 0x0f;
    if opcode != 5 {
        return Err(ParseError::InvalidOpcode);
    }

    let qdcount = u16::from_be_bytes([data[4], data[5]]) as usize;
    let ancount = u16::from_be_bytes([data[6], data[7]]) as usize;
    let nscount = u16::from_be_bytes([data[8], data[9]]) as usize;
    let arcount = u16::from_be_bytes([data[10], data[11]]) as usize;

    if qdcount != 1 {
        return Err(ParseError::InvalidHeader);
    }

Copilot AI Mar 24, 2026

NSUPDATE parsing/TSIG verification is protocol-sensitive (name compression, section counts, edge-case bounds). There are unit tests for related DNS wire helpers in src/dns/xfr/wire.rs, but this new parser/auth path has no tests. Adding a few fixtures (valid update, compressed-name update, TSIG-signed update, malformed packets) would help prevent regressions and validate interoperability with nsupdate.

Comment on lines +108 to +144
async fn add_record(
    zone: &Zone,
    owner_name: &str,
    update: &UpdateRecord,
) -> Result<bool, UpdateError> {
    let (record_type, value, priority) = rr_to_record_value(update)?;

    let relative_name = absolute_to_relative(owner_name, &zone.name)?;

    let record_repo = get_record_repository();
    let created = record_repo
        .create(Record {
            id: 0,
            name: relative_name.clone(),
            record_type: record_type.clone(),
            value: value.clone(),
            ttl: Some(update.ttl as i32),
            priority,
            zone_id: zone.id,
            created_at: Utc::now(),
        })
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to create record: {}", e)))?;

    log_zone_change(
        zone.id,
        "ADD",
        &created.name,
        &record_type,
        &value,
        created.ttl,
        created.priority,
    )
    .await?;

    Ok(true)
}

Copilot AI Mar 24, 2026

add_record always inserts a new DB row and reports changed=true without checking whether an identical RR already exists. Since the records table has no uniqueness constraints, repeated nsupdate add ... calls will create duplicates and can diverge from expected RFC2136 behavior (adding an existing RR should be a no-op). Also, this bypasses the validation rules enforced by the HTTP API (e.g., CNAME conflicts, NS-at-apex/glue checks), allowing invalid zone states to be created via nsupdate. Suggest reusing the same validation logic and treating existing identical RRs as unchanged.
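The "existing identical RR is a no-op" rule from RFC 2136 can be sketched with a duplicate check before the insert. `Rr` and the in-memory `Vec` zone are simplified stand-ins for the PR's `Record` and repository:

```rust
// Sketch: adding an RR that already exists should not change the zone
// (no new row, no zone_change entry, no serial bump).
#[derive(Clone, PartialEq)]
struct Rr {
    name: String,
    rtype: String,
    value: String,
}

fn add_record(zone: &mut Vec<Rr>, rr: Rr) -> bool {
    // Case-insensitive owner-name/value comparison, mirroring the
    // eq_ignore_ascii_case matching used on the delete path.
    let exists = zone.iter().any(|r| {
        r.name.eq_ignore_ascii_case(&rr.name)
            && r.rtype == rr.rtype
            && r.value.eq_ignore_ascii_case(&rr.value)
    });
    if exists {
        return false; // unchanged: skip insert and change logging
    }
    zone.push(rr);
    true
}

fn main() {
    let mut zone = Vec::new();
    let rr = Rr {
        name: "www".into(),
        rtype: "A".into(),
        value: "192.0.2.1".into(),
    };
    assert!(add_record(&mut zone, rr.clone())); // first add changes the zone
    assert!(!add_record(&mut zone, rr));        // identical add is a no-op
    assert_eq!(zone.len(), 1);
}
```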

Comment on lines +72 to +73
let this_changed = apply_single_update(&zone, update).await?;
changed = changed || this_changed;

Copilot AI Mar 24, 2026

Updates are applied incrementally to the database inside the for update in &request.updates loop. If a later update fails, earlier record creates/deletes (and zone_change rows) remain committed but the server returns an error rcode, and the zone serial/snapshot is not bumped—leaving the zone in a partially-updated state. For RFC2136-style updates this should be atomic per request; consider running the whole update in a single DB transaction (records + zone_changes + zone serial + snapshot), or at least pre-validate all updates before mutating state so you can fail without side effects.

Suggested change

Before:

let this_changed = apply_single_update(&zone, update).await?;
changed = changed || this_changed;

After:
match apply_single_update(&zone, update).await {
Ok(this_changed) => {
changed = changed || this_changed;
}
Err(err) => {
// If any prior updates have already changed the zone, ensure we
// still bump the serial and take a snapshot so that the on-disk
// state is consistent with the applied changes, even though the
// overall update request failed.
if changed {
match bump_zone_serial(&zone).await {
Ok(new_serial) => {
if let Err(e) = save_zone_snapshot(&zone, new_serial).await {
log_error!(
"NSUPDATE snapshot failed for zone {} after partial update: {}",
zone.name,
e
);
} else {
if let Err(e) =
dns::xfr::notify::send_notify(Some(&zone.name)).await
{
log_error!(
"NSUPDATE notify failed for zone {} after partial update: {}",
zone.name,
e
);
}
log_info!(
"NSUPDATE committed partial update for zone {} with serial {}",
zone.name,
new_serial
);
}
}
Err(e) => {
log_error!(
"NSUPDATE failed to bump serial for zone {} after partial update: {}",
zone.name,
e
);
}
}
}
return Err(err);
}
}

Comment on lines +126 to +134
let mut response = vec![0u8; DNS_HEADER_LEN];

response[0] = query_data[0];
response[1] = query_data[1];

let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;

response[2] = 0x80 | opcode_bits | rd_bit;

Copilot AI Mar 24, 2026

build_response returns only a 12-byte header with all section counts left as 0 and does not include a TSIG record in the response. When TSIG is used, clients typically require a signed TSIG response; without it nsupdate may treat the operation as failed even if the server applied it. Consider constructing a proper DNS UPDATE response (copying the zone section / setting counts appropriately) and generating a TSIG response when the request was TSIG-signed.

Suggested change

Before:

let mut response = vec![0u8; DNS_HEADER_LEN];
response[0] = query_data[0];
response[1] = query_data[1];
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;

After:
// Start from the full query so we preserve the zone section and any
// additional records (including TSIG) in the response.
let mut response = query_data.to_vec();
// Copy ID from query.
response[0] = query_data[0];
response[1] = query_data[1];
// Preserve opcode and RD bit, set QR bit for response.
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;
// Preserve upper flag bits, set RCODE in lower 4 bits.

.to_string();

if secret.is_empty() {
return Ok(());

Copilot AI Mar 24, 2026

validate_tsig returns Ok(()) when dns.nsupdate_tsig_key is empty, meaning NSUPDATE is accepted without any authentication by default (and the sample config sets this key to ""). That enables unauthenticated remote mutation of DNS records, which is a high-risk default. Consider requiring an explicit enable flag / allowed IP ACL for nsupdate, or refusing updates when the key is unset (and logging a clear warning if you keep the current behavior).

Suggested change

Before:

return Ok(());

After:
return Err(UpdateError::Refused(
"TSIG key is not configured; refusing unauthenticated nsupdate".to_string(),
));

Comment on lines +434 to +444
let zone_repo = get_zone_repository();
let zone = zone_repo
    .get_by_id(zone_id)
    .await
    .map_err(|e| UpdateError::Internal(format!("failed to load zone for change: {}", e)))?
    .ok_or_else(|| UpdateError::Internal(format!("zone {} not found", zone_id)))?;

let serial = generate_serial(zone.serial);

let change_repo = get_zone_change_repository();
change_repo

Copilot AI Mar 24, 2026

log_zone_change re-loads the zone from the database and re-computes a serial for every single RR change. For multi-record updates this becomes an avoidable N+1 query pattern and also risks serial inconsistencies if the zone serial changes concurrently. Consider computing the serial once per update request and passing it into log_zone_change (or batching zone_change inserts) so all changes share the same serial and only one zone fetch is needed.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 47 out of 52 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (1)

src/dns/xfr/notify.rs:58

  • send_notify_for_zone now reads from dns.secondary_addrs. Since the NOTIFY sender expects this config to contain ip:port values (it parses as SocketAddr), please ensure docs/fixtures consistently include ports (e.g., 127.0.0.1:53) or adjust parsing to accept bare IPs to avoid NOTIFY becoming a no-op with IP-only configs.


Comment on lines 20 to +22
listen_port = 53
secondary_addrs = "127.0.0.1"
nsupdate_tsig_key = ""

Copilot AI Mar 24, 2026

In this fixture, secondary_addrs is set to an IP without a port. The NOTIFY sender parses dns.secondary_addrs as SocketAddr (ip:port) and will reject bare IPs, so tests using this fixture may unintentionally skip NOTIFY behavior. Consider changing this to 127.0.0.1:53 (or updating NOTIFY parsing to accept IP-only entries).
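The "accept IP-only entries" option can be sketched as a two-step parse: try `SocketAddr` first, then fall back to `IpAddr` with port 53. The helper name is illustrative, not the PR's:

```rust
use std::net::{IpAddr, SocketAddr};

// Sketch: accept either "ip:port" or a bare IP (defaulting to port 53)
// when parsing dns.secondary_addrs entries, so fixtures like
// `secondary_addrs = "127.0.0.1"` do not silently disable NOTIFY.
fn parse_secondary_addr(entry: &str) -> Option<SocketAddr> {
    if let Ok(sa) = entry.parse::<SocketAddr>() {
        return Some(sa);
    }
    entry.parse::<IpAddr>().ok().map(|ip| SocketAddr::new(ip, 53))
}

fn main() {
    assert_eq!(
        parse_secondary_addr("127.0.0.1"),
        Some("127.0.0.1:53".parse().unwrap())
    );
    assert_eq!(
        parse_secondary_addr("127.0.0.1:5300"),
        Some("127.0.0.1:5300".parse().unwrap())
    );
    assert_eq!(parse_secondary_addr("not-an-ip"), None);
}
```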

@kweonminsung requested review from Copilot and removed request for Copilot March 24, 2026 20:20
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 47 out of 52 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (1)

src/dns/xfr/notify.rs:78

  • dns.secondary_addrs is parsed as SocketAddr (requiring an explicit port), but elsewhere (DNS ACL parsing) the same config allows bare IPs. This inconsistency means NOTIFY will be skipped for entries like 127.0.0.1 even though the server ACL accepts them. Consider accepting IP-only entries here by defaulting the port to 53 (or updating config/docs/tests to always include a port).


Comment on lines +121 to +136
fn build_response(query_data: &[u8], rcode: u8) -> Option<Vec<u8>> {
    if query_data.len() < DNS_HEADER_LEN {
        return None;
    }

    let mut response = vec![0u8; DNS_HEADER_LEN];

    response[0] = query_data[0];
    response[1] = query_data[1];

    let opcode_bits = query_data[2] & 0x78;
    let rd_bit = query_data[2] & 0x01;

    response[2] = 0x80 | opcode_bits | rd_bit;
    response[3] = (query_data[3] & 0xF0) | (rcode & 0x0F);


Copilot AI Mar 24, 2026

NSUPDATE builds a response containing only the 12-byte DNS header. Many clients (including nsupdate) expect the Zone/Question section (QDCOUNT=1) to be present in the response, mirroring the request’s zone section; returning a header-only message can be treated as FORMERR or cause the client to hang/retry. Consider constructing a proper UPDATE response by copying the original ID/opcode and including the original zone section (and setting the header counts appropriately), even if prerequisite/update sections are empty.

Suggested change

Before:

fn build_response(query_data: &[u8], rcode: u8) -> Option<Vec<u8>> {
    if query_data.len() < DNS_HEADER_LEN {
        return None;
    }
    let mut response = vec![0u8; DNS_HEADER_LEN];
    response[0] = query_data[0];
    response[1] = query_data[1];
    let opcode_bits = query_data[2] & 0x78;
    let rd_bit = query_data[2] & 0x01;
    response[2] = 0x80 | opcode_bits | rd_bit;
    response[3] = (query_data[3] & 0xF0) | (rcode & 0x0F);

After:
/// Returns the end offset (exclusive) of the first question/zone section in the DNS message.
/// The offset is measured from the start of the message buffer.
fn zone_section_end(message: &[u8]) -> Option<usize> {
if message.len() < DNS_HEADER_LEN {
return None;
}
let mut offset = DNS_HEADER_LEN;
// Parse QNAME: sequence of labels ending with zero length,
// or a compression pointer (two bytes with top two bits set).
loop {
if offset >= message.len() {
return None;
}
let len = message[offset];
// Compression pointer: 11xxxxxx in the first byte, followed by one more byte.
if (len & 0xC0) == 0xC0 {
if offset + 1 >= message.len() {
return None;
}
offset += 2;
break;
}
// End of QNAME.
if len == 0 {
offset += 1;
break;
}
// Regular label.
let label_len = len as usize;
offset += 1 + label_len;
if offset > message.len() {
return None;
}
}
// After QNAME, there must be QTYPE (2 bytes) and QCLASS (2 bytes).
if offset + 4 > message.len() {
return None;
}
Some(offset + 4)
}
fn build_response(query_data: &[u8], rcode: u8) -> Option<Vec<u8>> {
if query_data.len() < DNS_HEADER_LEN {
return None;
}
// Extract QDCOUNT from the query.
let qdcount = ((query_data[4] as u16) << 8) | (query_data[5] as u16);
// Try to determine the end of the first question/zone section if present.
let zone_end = if qdcount > 0 {
zone_section_end(query_data)
} else {
None
};
// If we cannot determine the zone section, fall back to a header-only response.
if zone_end.is_none() {
let mut response = vec![0u8; DNS_HEADER_LEN];
// Copy ID.
response[0] = query_data[0];
response[1] = query_data[1];
// Preserve opcode and RD bit from the query, set QR=1, clear other flags.
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;
// Preserve upper flag bits in byte 3, set RCODE.
response[3] = (query_data[3] & 0xF0) | (rcode & 0x0F);
// Leave counts as zero (header-only response).
return Some(response);
}
let zone_end = zone_end.unwrap();
// Allocate space for header plus the original zone/question section.
let mut response = vec![0u8; zone_end];
// Copy ID.
response[0] = query_data[0];
response[1] = query_data[1];
// Preserve opcode and RD bit from the query, set QR=1, clear other flags.
let opcode_bits = query_data[2] & 0x78;
let rd_bit = query_data[2] & 0x01;
response[2] = 0x80 | opcode_bits | rd_bit;
// Preserve upper flag bits in byte 3, set RCODE.
response[3] = (query_data[3] & 0xF0) | (rcode & 0x0F);
// Set header counts: QDCOUNT = 1, others = 0.
response[4] = 0x00;
response[5] = 0x01; // QDCOUNT
response[6] = 0x00;
response[7] = 0x00; // ANCOUNT
response[8] = 0x00;
response[9] = 0x00; // NSCOUNT
response[10] = 0x00;
response[11] = 0x00; // ARCOUNT
// Copy the original zone/question section from the query.
response[DNS_HEADER_LEN..zone_end]
.copy_from_slice(&query_data[DNS_HEADER_LEN..zone_end]);

Comment on lines +430 to +457
async fn log_zone_change(
    zone_id: i32,
    operation: &str,
    name: &str,
    record_type: &RecordType,
    value: &str,
    ttl: Option<i32>,
    priority: Option<i32>,
) -> Result<(), UpdateError> {
    let zone = RepositoryService::get_zone_by_id(zone_id)
        .await
        .map_err(|e| UpdateError::Internal(format!("failed to load zone for change: {}", e)))?
        .ok_or_else(|| UpdateError::Internal(format!("zone {} not found", zone_id)))?;

    let serial = generate_serial(zone.serial);

    RepositoryService::create_zone_change(ZoneChange {
        id: 0,
        zone_id,
        serial,
        operation: operation.to_string(),
        record_name: name.to_string(),
        record_type: record_type.to_string(),
        record_value: value.to_string(),
        record_ttl: ttl,
        record_priority: priority,
    })
    .await

Copilot AI Mar 24, 2026


log_zone_change reloads the zone and derives a new serial with generate_serial(zone.serial) independently of bump_zone_serial. Under concurrent updates this can produce zone-change rows with a serial that doesn’t match the serial ultimately written to the zone/snapshot (e.g., another request bumps serial between the reads), which can break IXFR history queries. To keep history consistent, compute a single new_serial once per UPDATE request and use that same value for all ZoneChange rows, the zone serial bump, and the snapshot (ideally within a DB transaction or with a compare-and-swap update).
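The single-serial approach the comment suggests can be sketched as follows. This is a simplified, in-memory stand-in: `Zone`, `ZoneChange`, and `generate_serial` here are illustrative placeholders for bindizr's actual repository types, and the real fix would additionally wrap the writes in a DB transaction or compare-and-swap.

```rust
// Hedged sketch: compute `new_serial` once per UPDATE request and thread
// the same value through every change row and the final zone bump, so
// IXFR history rows can never disagree with the zone serial.
#[derive(Debug, PartialEq)]
struct ZoneChange {
    serial: u32,
    operation: String,
}

struct Zone {
    serial: u32,
    changes: Vec<ZoneChange>,
}

// Stand-in for the project's serial generator: monotonically increasing.
fn generate_serial(current: u32) -> u32 {
    current + 1
}

fn apply_update(zone: &mut Zone, operations: &[&str]) -> u32 {
    // One serial for the whole request, not one per logged change.
    let new_serial = generate_serial(zone.serial);
    for op in operations {
        zone.changes.push(ZoneChange {
            serial: new_serial,
            operation: op.to_string(),
        });
    }
    // The zone bump (and snapshot, in the real code) reuses the same value.
    zone.serial = new_serial;
    new_serial
}

fn main() {
    let mut zone = Zone { serial: 2026032401, changes: Vec::new() };
    let s = apply_update(&mut zone, &["ADD", "DELETE"]);
    assert_eq!(s, 2026032402);
    assert!(zone.changes.iter().all(|c| c.serial == zone.serial));
    println!("ok");
}
```

The key property is that every `ZoneChange` row created for one request carries exactly the serial that ends up on the zone.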

Comment on lines +130 to +139
let created = RepositoryService::create_record(Record {
    id: 0,
    name: relative_name.clone(),
    record_type: record_type.clone(),
    value: value.clone(),
    ttl: Some(update.ttl as i32),
    priority,
    zone_id: zone.id,
    created_at: Utc::now(),
})

Copilot AI Mar 24, 2026


ttl: Some(update.ttl as i32) truncates a u32 TTL into i32 without validation. Values > i32::MAX will wrap to negative TTLs in the DB, and later casts back to u32 (e.g., in XFR encoding) can produce huge TTLs. Consider validating the TTL range before casting and returning REFUSED/BADREQUEST for out-of-range TTLs.
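A minimal sketch of the suggested guard: validate the wire-format `u32` TTL before storing it in an `i32` column, instead of casting with `as i32` and silently wrapping. The `UpdateError::Refused` variant is assumed here to mirror the error type used elsewhere in this PR.

```rust
// Reject TTLs that do not fit in an i32 rather than wrapping negative.
#[derive(Debug, PartialEq)]
enum UpdateError {
    Refused(String),
}

fn checked_ttl(ttl: u32) -> Result<i32, UpdateError> {
    // i32::try_from fails exactly when ttl > i32::MAX.
    i32::try_from(ttl)
        .map_err(|_| UpdateError::Refused(format!("TTL {} exceeds i32::MAX", ttl)))
}

fn main() {
    assert_eq!(checked_ttl(3600), Ok(3600));
    // `u32::MAX as i32` would have produced -1; the checked path refuses it.
    assert!(checked_ttl(u32::MAX).is_err());
    println!("ok");
}
```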

@kweonminsung
Owner Author

@copilot open a new pull request to apply changes based on the comments in this thread

Contributor

Copilot AI commented Mar 24, 2026

@kweonminsung I've opened a new pull request, #75, to work on those changes. Once the pull request is ready, I'll request review from you.

Copilot AI and others added 3 commits March 24, 2026 20:38
- build_response now copies the zone section into the response and sets
  QDCOUNT=1 so clients don't treat the reply as malformed
- Compute new_serial once per apply_update request and thread it through
  the call chain; removes per-change zone reload in log_zone_change
- Validate u32 TTL <= i32::MAX in add_record before casting to avoid
  negative TTLs being stored in the database

Co-authored-by: kweonminsung <79230043+kweonminsung@users.noreply.github.com>
Agent-Logs-Url: https://github.com/kweonminsung/bindizr/sessions/38cc6a44-4082-40f0-b5df-4e45ff363f4c
fix(nsupdate): proper UPDATE response, serial consistency, TTL overflow guard
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 47 out of 52 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (1)

src/service/record.rs:129

  • The error message for CNAME conflicts says "A record with name ... already exists", but this check is for any existing RRset at that owner name. This can confuse clients (it reads like an A record specifically). Consider rewording to something like "A record already exists"/"Records already exist" or "an RRset already exists" for that owner name.


Comment on lines +161 to +165
fn classify_query_route(query_data: &[u8]) -> Result<QueryRoute, String> {
    if nsupdate::is_nsupdate(query_data) {
        return Ok(QueryRoute::Nsupdate);
    }


Copilot AI Mar 24, 2026


NSUPDATE handling is routed purely by opcode and doesn’t apply any IP-based ACL before accepting/processing the request. With the default dns.nsupdate_tsig_key = "" (which disables TSIG checks), this makes dynamic updates unauthenticated and reachable from anywhere the DNS port is exposed. Consider enforcing TSIG by default, or adding a separate dns.nsupdate_allowed_ips/reusing the existing secondary ACL to restrict who can send updates when TSIG is disabled.
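One shape that gate could take is sketched below, assuming a `dns.nsupdate_allowed_ips` list like the one already present in the test fixture. The function name and its placement in the routing layer are illustrative, not bindizr's actual API.

```rust
use std::net::IpAddr;

// Hypothetical ACL gate for the routing layer: when no TSIG key is
// configured, fall back to an explicit source-IP allowlist instead of
// accepting UPDATE messages from anywhere the DNS port is reachable.
fn nsupdate_permitted(src: IpAddr, tsig_key: &str, allowed_ips: &[IpAddr]) -> bool {
    if !tsig_key.is_empty() {
        // TSIG is configured; signature validation happens later in the pipeline.
        return true;
    }
    allowed_ips.contains(&src)
}

fn main() {
    let allowed: Vec<IpAddr> = vec!["192.168.1.2".parse().unwrap()];
    assert!(nsupdate_permitted("192.168.1.2".parse().unwrap(), "", &allowed));
    assert!(!nsupdate_permitted("10.0.0.9".parse().unwrap(), "", &allowed));
    println!("ok");
}
```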

Comment on lines +78 to +83
fn decode_tsig_secret(raw: &str) -> Vec<u8> {
    match base64::engine::general_purpose::STANDARD.decode(raw) {
        Ok(bytes) if !bytes.is_empty() => bytes,
        _ => raw.as_bytes().to_vec(),
    }
}

Copilot AI Mar 24, 2026


decode_tsig_secret silently falls back to using the raw (non-base64) bytes if base64 decoding fails. This can turn a misconfigured base64 secret into a different key without any visibility, leading to hard-to-debug auth failures. Consider requiring valid base64 (and returning/logging a configuration error) or at least logging a warning when falling back.
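A stricter variant could look like the sketch below. To keep it dependency-free, the `decode` closure stands in for `base64::engine::general_purpose::STANDARD.decode`; the point is only the error handling around it.

```rust
// Hedged sketch: a secret that is not valid base64 becomes a hard
// configuration error instead of silently being reinterpreted as raw bytes.
fn decode_tsig_secret_strict<E>(
    raw: &str,
    decode: impl Fn(&str) -> Result<Vec<u8>, E>,
) -> Result<Vec<u8>, String> {
    match decode(raw) {
        Ok(bytes) if !bytes.is_empty() => Ok(bytes),
        Ok(_) => Err("nsupdate_tsig_key decodes to an empty secret".to_string()),
        Err(_) => Err("nsupdate_tsig_key is not valid base64".to_string()),
    }
}

fn main() {
    // Toy decoder: accepts only the literal "c2VjcmV0" ("secret" in base64).
    let decode = |s: &str| {
        if s == "c2VjcmV0" { Ok(b"secret".to_vec()) } else { Err(()) }
    };
    assert_eq!(decode_tsig_secret_strict("c2VjcmV0", decode), Ok(b"secret".to_vec()));
    // A misconfigured secret now fails loudly instead of becoming a different key.
    assert!(decode_tsig_secret_strict("not base64!", decode).is_err());
    println!("ok");
}
```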

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 47 out of 52 changed files in this pull request and generated 3 comments.

Comments suppressed due to low confidence (1)

src/service/record.rs:129

  • This error message says "A record with name ... already exists" but the collision could be with any record type (e.g., TXT/MX/etc.). Consider rewording to something record-type-agnostic (e.g., "A record with owner name ..." or "An RRset with name ...") to avoid misleading clients.


[dns]
listen_port = 53 # DNS server listen port (UDP and TCP)
secondary_addrs = "" # Comma-separated secondary DNS server addresses for NOTIFY (e.g., "192.168.1.2:53,192.168.1.3:53")
nsupdate_tsig_key = "" # Shared TSIG secret for nsupdate authentication (empty to disable, base64 recommended)

Copilot AI Mar 24, 2026


The sample config sets nsupdate_tsig_key = "" (disables TSIG validation). If listen_addr is ever changed off-loopback, this makes dynamic updates unauthenticated by default. Consider changing the shipped example to require a non-empty TSIG key (or clearly warning that leaving it empty enables unauthenticated updates) to avoid insecure deployments.

Suggested change
nsupdate_tsig_key = "" # Shared TSIG secret for nsupdate authentication (empty to disable, base64 recommended)
nsupdate_tsig_key = "CHANGE_ME_TSIG_BASE64" # REQUIRED: per-deployment TSIG secret for nsupdate authentication (base64 recommended). Setting this to "" DISABLES AUTH and is insecure if listen_addr is not 127.0.0.1.

Comment on lines +123 to +133
validate_nsupdate_add_constraints(zone, &relative_name, &record_type, &value)
    .await
    .map_err(|e| UpdateError::Refused(e.to_string()))?;

if find_identical_record_in_zone(zone.id, &relative_name, &record_type, &value, priority)
    .await
    .map_err(|e| UpdateError::Internal(e.to_string()))?
    .is_some()
{
    return Ok(false);
}

Copilot AI Mar 24, 2026


add_record currently loads the zone’s records at least twice per update (once inside validate_nsupdate_add_constraints and again inside find_identical_record_in_zone). On multi-record updates this adds avoidable DB roundtrips. Consider reusing a single in-memory record list for both validation + duplicate detection, or adding a repository query that checks for an identical record without fetching the full zone twice.
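The single-fetch refactor could be sketched as below. `Rec` and `validate_add` are simplified stand-ins for bindizr's record type and validation helpers; the point is that one in-memory list serves both the CNAME-conflict check and duplicate detection.

```rust
// Hedged sketch: fetch the zone's records once, then validate and
// de-duplicate against the same slice instead of two DB roundtrips.
#[derive(Clone, Debug, PartialEq)]
struct Rec {
    name: String,
    rtype: String,
    value: String,
}

// Returns Ok(true) if the record should be inserted, Ok(false) if it is
// an idempotent no-op, and Err(..) on a CNAME-exclusivity conflict.
fn validate_add(existing: &[Rec], candidate: &Rec) -> Result<bool, String> {
    // CNAME exclusivity: no other data may coexist at a CNAME owner name.
    let conflict = existing.iter().any(|r| {
        r.name == candidate.name
            && (r.rtype == "CNAME" || candidate.rtype == "CNAME")
            && r.rtype != candidate.rtype
    });
    if conflict {
        return Err(format!("conflicting RRset at {}", candidate.name));
    }
    // Duplicate detection over the same in-memory list.
    Ok(!existing.contains(candidate))
}

fn main() {
    let existing = vec![Rec {
        name: "www".into(),
        rtype: "A".into(),
        value: "1.2.3.4".into(),
    }];
    let dup = Rec { name: "www".into(), rtype: "A".into(), value: "1.2.3.4".into() };
    assert_eq!(validate_add(&existing, &dup), Ok(false));
    let cname = Rec { name: "www".into(), rtype: "CNAME".into(), value: "x.example.".into() };
    assert!(validate_add(&existing, &cname).is_err());
    println!("ok");
}
```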


Ok(UpdateResult::Applied { changed })
}


Copilot AI Mar 24, 2026


The NSUPDATE apply path (TSIG validation, prerequisite evaluation, record add/delete semantics, serial bumping, change logging, and NOTIFY) is security- and correctness-critical but currently has no direct tests in this module. Adding unit/integration tests for key scenarios (e.g., TSIG required/invalid, prerequisite failures mapping to the right RCODE, idempotent add/delete, and serial/zone_change updates) would help prevent regressions.

Suggested change
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn trim_dot_strips_trailing_dot_from_fqdn() {
        let name_with_dot = "example.com.";
        let name_without_dot = "example.com";
        assert_eq!(trim_dot(name_with_dot), "example.com");
        assert_eq!(trim_dot(name_without_dot), "example.com");
    }

    #[test]
    fn trim_dot_handles_root_and_empty_names() {
        // A single dot represents the root; our helper should yield an empty string,
        // which `apply_update` subsequently treats as "root zone is not supported".
        assert_eq!(trim_dot("."), "");
        // An already-empty name should remain empty.
        assert_eq!(trim_dot(""), "");
    }
}

@kweonminsung kweonminsung merged commit cae9991 into develop Mar 24, 2026
5 of 6 checks passed
@kweonminsung kweonminsung deleted the 72-feature-add-nsupdate-support branch March 24, 2026 21:55