View Issue Details
ID | Project | Category | View Status | Date Submitted | Last Update
---|---|---|---|---|---
0015426 | CentOS-7 | -OTHER | public | 2018-11-01 01:36 | 2018-11-16 00:06

Field | Value
---|---
Reporter | zimmertr
Priority | normal
Severity | minor
Reproducibility | always
Status | confirmed
Resolution | open
Product Version | 7.5.1804
Target Version | 
Fixed in Version | 
Summary | 0015426: NameServer baked into /etc/resolv.conf on qcow2 images
Description

I was directed to post here from this post on the CentOS forums; if necessary, you can find additional information about this bug there: https://www.centos.org/forums/viewtopic.php?f=47&t=68645

The CentOS 7 qcow2 images listed at https://cloud.centos.org/centos/7/images/?C=M;O=D contain a nameserver in `/etc/resolv.conf` by default.

Why is this nameserver baked into an official qcow2 image? Is this a bug? If not, it effectively breaks Ansible, because that DNS server may not exist on the network. Since `sshd` is configured with the `UseDNS yes` directive by default, it attempts a reverse DNS lookup of the workstation running Ansible whenever Ansible connects to the server, using a DNS server that does not exist, which causes Ansible to time out.

I can confirm the nameserver also exists in these images:

- CentOS-7-x86_64-GenericCloud.qcow2c
- CentOS-7-x86_64-GenericCloud.qcow2
- CentOS-7-x86_64-GenericCloud-1809.qcow2
- CentOS-7-x86_64-GenericCloud-1808.qcow2
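For illustration (not part of the original report), and assuming root access on a booted guest, the effective `UseDNS` value can be checked with `sshd -T`, and disabling it is a common way to avoid the reverse-lookup delay:

```
# Show the effective setting (run as root on the guest)
sshd -T | grep -i usedns

# Possible workaround: disable reverse DNS lookups in sshd
echo 'UseDNS no' >> /etc/ssh/sshd_config
systemctl restart sshd
```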
Steps To Reproduce

There are several ways to discover this issue. The most straightforward is to download the image, mount it on your workstation, and inspect `/etc/resolv.conf` manually:

```
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /tmp/CentOS7.qcow2c
fdisk -l /dev/nbd0
mount /dev/nbd0p1 /mnt/tmp
cat /mnt/tmp/etc/resolv.conf
```

The `cat` output shows:

```
# Generated by NetworkManager
nameserver 10.0.2.3
```

I personally discovered it while using `qm` on my Proxmox server to deploy some VMs to QEMU:

```
qm create 170 --cores 2 --memory 4096 --net0 "virtio,bridge=vmbr0" --ipconfig0 "gw=192.168.1.1,ip=192.168.1.170/24" --nameserver 192.168.1.10 --searchdomain "sol.milkyway" --sshkeys /root/.ssh/sol.milkyway.kubernetes.pub
wget https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2c -O /tmp/CentOS7.qcow2c
qm importdisk 170 /tmp/CentOS7.qcow2c Proxmox_lvm-thin
qm set 170 --scsihw virtio-scsi-pci --scsi0 Proxmox_lvm-thin:vm-170-disk-0
qm resize 170 scsi0 50G
qm set 170 --ide2 Proxmox_lvm-thin:cloudinit
qm set 170 --boot c --bootdisk scsi0
```

As above, in addition to my specified nameserver (192.168.1.10), an entry for `10.0.2.3` appears before it, so it is prioritized for DNS lookups:

```
[centos@VM170 ~]$ cat /etc/resolv.conf
; Created by cloud-init on instance boot automatically, do not edit.
;
# Generated by NetworkManager
nameserver 10.0.2.3
nameserver 192.168.1.10
search sol.milkyway
```
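For completeness (not in the original report), assuming the mount point and nbd device used in the steps above, the image can be detached again after inspection:

```
umount /mnt/tmp
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
```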
Additional Information

Before getting to this point, I posted in several other locations as well. Further discussion can be found here, in order of posting:

- https://serverfault.com/questions/937465/ansible-fails-at-gathering-hosts-presumably-because-ssh-is-slow-to-connect-se
- https://forum.proxmox.com/threads/seeking-support-for-using-qm-create-and-specifying-a-bridge.48316/
- https://reddit.com/r/homelab/comments/9s9bcc/proxmoxqemu_question_10023_dns_server/
Tags: qemu
I am also running into this issue, more specifically with cloud-init. cloud-init doesn't remove existing nameservers; it just appends its own. This isn't a big deal for me, since that nameserver simply gets ignored because it doesn't respond, but it is still a bug and should be fixed in the cloud images.
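As a sketch of a post-boot workaround (my addition, not from the note above): since cloud-init only appends, the stale entry can be deleted from the generated file on the guest. NetworkManager or cloud-init may rewrite the file on a later boot, so this is not a permanent fix.

```
# Remove the baked-in entry on the running guest (run as root)
sed -i '/^nameserver 10\.0\.2\.3$/d' /etc/resolv.conf
```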
I would assume it is a big deal for you regardless of cloud-init, because sshd performs a reverse DNS lookup of the connecting host by default. Since this nameserver entry exists in `/etc/resolv.conf` but the server likely doesn't exist on your network, that reverse lookup takes upwards of a minute to time out, effectively making every SSH connection attempt take a minute rather than a few seconds.
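One way to fix the image itself before deployment (an illustrative sketch, assuming libguestfs-tools is installed; virt-customize is not mentioned in the report) is to empty `/etc/resolv.conf` in the downloaded qcow2 before importing it, e.g. before the `qm importdisk` step above:

```
# Clear the baked-in resolv.conf inside the downloaded image
virt-customize -a /tmp/CentOS7.qcow2c --run-command 'truncate -s 0 /etc/resolv.conf'
```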