
I run an LXC container on an Ubuntu 23.04 VM (UTM on my Mac laptop), but I cannot ping 8.8.8.8 from inside the container.

I installed LXD via snap.

Note: I have searched through a lot of information trying to fix this, but it still does not work.
I can ping the LXC container from the VM and vice versa.
I can ping the container's default gateway from inside the container.
I have restarted the snap LXD service and toggled the bridge's ipv4.nat and ipv4.firewall settings between false and true, but the problem still exists.
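Roughly, these are the commands I tried (reconstructed from memory, so treat the exact invocations as approximate):

sudo systemctl restart snap.lxd.daemon
lxc network set lxdbr0 ipv4.firewall false
lxc network set lxdbr0 ipv4.firewall true
lxc network set lxdbr0 ipv4.nat true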

My LXC container info:

lxc config show ubuntu --expanded
architecture: aarch64
config:
  image.architecture: arm64
  image.description: Ubuntu focal arm64 (20240118_07:42)
  image.os: Ubuntu
  image.release: focal
  image.serial: "20240118_07:42"
  image.type: squashfs
  image.variant: default
  volatile.base_image: 2c855bd13a6d33ff3ea6a9adcf9f6454da4314cd4629d988c98b7da91e00eb09
  volatile.cloud-init.instance-id: 57f1f7c4-33fb-40cb-9bc3-9f4a75bc881c
  volatile.eth0.host_name: veth84f5e9b3
  volatile.eth0.hwaddr: 00:16:3e:d8:ca:72
  volatile.idmap.base: "0"
  volatile.idmap.current: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.idmap.next: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.idmap: '[{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},{"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]'
  volatile.last_state.power: RUNNING
  volatile.uuid: 0c87c07d-4693-4344-bc7d-cb5d95f1747e
  volatile.uuid.generation: 0c87c07d-4693-4344-bc7d-cb5d95f1747e
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
ephemeral: false
profiles:
- default
stateful: false
description: ""
lxc exec ubuntu -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
26: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:d8:ca:72 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.4.135.180/24 brd 10.4.135.255 scope global dynamic eth0
       valid_lft 1989sec preferred_lft 1989sec
    inet6 fd42:e368:9853:dcf7:216:3eff:fed8:ca72/64 scope global mngtmpaddr noprefixroute 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fed8:ca72/64 scope link 
       valid_lft forever preferred_lft forever
lxc exec ubuntu -- ip r
default via 10.4.135.1 dev eth0 proto dhcp src 10.4.135.180 metric 100 
10.4.135.0/24 dev eth0 proto kernel scope link src 10.4.135.180 
10.4.135.1 dev eth0 proto dhcp scope link src 10.4.135.180 metric 100

ip route get inside the container:

lxc exec ubuntu -- ip route get 8.8.8.8
8.8.8.8 via 10.4.135.1 dev eth0 src 10.4.135.180 uid 0 
    cache

My VM info:

ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp0s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d6:ae:f7:20:02:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.64.3/24 metric 100 brd 192.168.64.255 scope global dynamic enp0s1
       valid_lft 72951sec preferred_lft 72951sec
    inet6 fdbd:2af3:625b:81be:d4ae:f7ff:fe20:2a4/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 2591887sec preferred_lft 604687sec
    inet6 fe80::d4ae:f7ff:fe20:2a4/64 scope link 
       valid_lft forever preferred_lft forever
3: br-251cc27d7151: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:59:7f:0b:21 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-251cc27d7151
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:23:b4:8c:aa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: br-df1c5081bc7d: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:34:97:a8:d8 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-df1c5081bc7d
       valid_lft forever preferred_lft forever
22: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
25: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6e:79:97 brd ff:ff:ff:ff:ff:ff
    inet 10.4.135.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:e368:9853:dcf7::1/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe6e:7997/64 scope link 
       valid_lft forever preferred_lft forever
27: veth84f5e9b3@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master lxdbr0 state UP group default qlen 1000
    link/ether a2:36:90:15:0e:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
ip r
default via 192.168.64.1 dev enp0s1 proto dhcp src 192.168.64.3 metric 100 
10.0.3.0/24 dev lxcbr0 proto kernel scope link src 10.0.3.1 linkdown 
10.4.135.0/24 dev lxdbr0 proto kernel scope link src 10.4.135.1 
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 
172.18.0.0/16 dev br-df1c5081bc7d proto kernel scope link src 172.18.0.1 linkdown 
172.19.0.0/16 dev br-251cc27d7151 proto kernel scope link src 172.19.0.1 linkdown 
192.168.64.0/24 dev enp0s1 proto kernel scope link src 192.168.64.3 metric 100 
192.168.64.1 dev enp0s1 proto dhcp scope link src 192.168.64.3 metric 100

My VM's routing table:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    100    0        0 enp0s1
10.0.3.0        0.0.0.0         255.255.255.0   U     0      0        0 lxcbr0
10.4.135.0      0.0.0.0         255.255.255.0   U     0      0        0 lxdbr0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-df1c5081bc7d
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-251cc27d7151
192.168.64.0    0.0.0.0         255.255.255.0   U     100    0        0 enp0s1
_gateway        0.0.0.0         255.255.255.255 UH    100    0        0 enp0s1

Capturing on lxdbr0 from the VM while pinging 8.8.8.8 in the container:

sudo tcpdump -ni lxdbr0 icmp

tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on lxdbr0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:01:50.099598 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 68, length 64
23:01:51.126634 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 69, length 64
23:01:52.147930 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 70, length 64
23:01:53.174693 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 71, length 64
23:01:54.195179 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 72, length 64
23:01:55.219536 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 73, length 64
23:01:56.242768 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 74, length 64
23:01:57.267456 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 75, length 64
23:01:58.295056 IP 10.4.135.180 > 8.8.8.8: ICMP echo request, id 188, seq 76, length 64
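A useful comparison (not captured above) would be the same filter on the VM's uplink:

sudo tcpdump -ni enp0s1 icmp

If masquerading were working, the requests should show up there with source 192.168.64.3; since no replies ever appear on lxdbr0, it looks like the packets are never NATed out.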

And I have run the following command on my VM:

iptables -A FORWARD -p all -i lxdbr0 -j ACCEPT

I have also tried disabling Docker. Unfortunately, it still did not work.
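(Presumably a matching ACCEPT for the return direction, along the lines of iptables -A FORWARD -o lxdbr0 -j ACCEPT, would also be needed; I mention it here for completeness.)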

iptables output:

iptables -t nat -L -n -v --line-numbers

Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1       78 11520 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        5  1196 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           
2        0     0 MASQUERADE  all  --  *      !br-df1c5081bc7d  172.18.0.0/16        0.0.0.0/0           
3        0     0 MASQUERADE  all  --  *      !br-251cc27d7151  172.19.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
num   pkts bytes target     prot opt in     out     source               destination         
1        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
2        0     0 RETURN     all  --  br-df1c5081bc7d *       0.0.0.0/0            0.0.0.0/0           
3        0     0 RETURN     all  --  br-251cc27d7151 *       0.0.0.0/0            0.0.0.0/0           
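Note that the POSTROUTING chain above only contains Docker's MASQUERADE rules; there is no MASQUERADE rule for the LXD subnet 10.4.135.0/24, even though ipv4.nat is set to "true" below.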

lxdbr0 network info:

lxc network show lxdbr0

config:
  ipv4.address: 10.4.135.1/24
  ipv4.firewall: "true"
  ipv4.nat: "true"
  ipv6.address: fd42:e368:9853:dcf7::1/64
  ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/ubuntu
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
  • Can you show the output of iptables -t nat -L -n -v --line-numbers please? It's possible that your LXD network and bridges are not properly doing what is called "NAT MASQUERADE" and therefore are not able to get to the Internet via the host system. That's normally handled automatically by LXD but sometimes is not if you use a custom ruleset. Also show the output of lxc network show lxdbr0.
    – Thomas Ward
    Commented Jan 20 at 4:31
Sure, I have edited the question and added it at the bottom.
    – Harris
    Commented Jan 20 at 5:29
  • What is the output on your host of sysctl net.bridge.bridge-nf-call-arptables net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables? If this gives errors or returns 0s, that's good. If you see 1s then you're the victim of Docker's assumed hegemony over the machine's network bridges, and the solution is rather involved (if you want to keep Docker on the system).
    – zwets
    Commented Jan 20 at 21:55
  • With 'host' in the previous comment I meant the VM, not the Mac.
    – zwets
    Commented Jan 20 at 22:09

2 Answers


Try adding the following to your ruleset:

iptables -t nat -A POSTROUTING -s 10.4.135.0/24 ! -d 10.4.135.0/24 -j MASQUERADE

Normally this is done by LXD automatically, but with Docker and other bridges on your system it's possible that these NAT rules got removed. (This is why the multiple VM networks, LXD bridges, etc. on my own system are all added to the NAT tables manually, but that's just me being a power user.)
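Alternatively, you can have LXD recreate its own firewall rules by reloading the snap daemon (a suggestion based on how the snap packaging works, not something verified in this particular setup):

sudo systemctl reload snap.lxd.daemon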


For those who want interpretation of the above command's arguments:

  • -t nat - Operate on the NAT table instead of the default filter table.
  • -A POSTROUTING - Append to the chain that processes traffic after routing, which is where outgoing traffic from LXD bridges and the like is masqueraded as your system.
  • -s 10.4.135.0/24 ! -d 10.4.135.0/24 - Only apply this to traffic that originates in the lxdbr0 subnet and is leaving it. (POSTROUTING cannot match the incoming interface with -i, which is why the source subnet is matched instead.)
  • -j MASQUERADE - Automatically NAT as the outgoing interface, which for Internet traffic should be your device with the Internet link.
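To verify the rule took effect, check that its packet counters increase while pinging from the container:

sudo iptables -t nat -L POSTROUTING -n -v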

I have also encountered this problem; in my case something on the host had broken iptables forwarding, and simply adding an outbound forwarding rule for lxdbr0 was not enough. After some experimentation, the following solution ultimately fixed it for me:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE 
iptables -A FORWARD -i lxdbr0 -o eth0 -j ACCEPT 
iptables -A FORWARD -i eth0 -o lxdbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT

where, "lxdbr0" is the virtual interface connected to the LXC container, "eth0" is the network port on the host. These three commands allow and manage the traffic and reverse traffic from the LXC container to the external network.

  • iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE : this rule adds a MASQUERADE operation on the POSTROUTING chain of the NAT table, which applies to all outbound packets passing through the eth0 interface. The MASQUERADE operation will change the source IP address to the IP address of the eth0 interface, which is usually used to hide the address of the internal network.
  • iptables -A FORWARD -i lxdbr0 -o eth0 -j ACCEPT : this rule allows all packets from the lxdbr0 interface to be forwarded to the eth0 interface.
  • iptables -A FORWARD -i eth0 -o lxdbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT : This rule allows packet forwarding from the eth0 interface to the lxdbr0 interface, but only for those packets associated with established or related connections. This ensures that only replies (or related packets) to connections initiated internally can enter the internal network.
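One more prerequisite worth checking: kernel IP forwarding must be enabled on the host, otherwise the FORWARD rules above never see any traffic:

sysctl net.ipv4.ip_forward
sudo sysctl -w net.ipv4.ip_forward=1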

For hosts managed with nftables, the equivalent rules are as follows:

nft add rule ip nat POSTROUTING oifname "eth0" masquerade
nft add rule ip filter FORWARD iifname "lxdbr0" oifname "eth0" accept
nft add rule ip filter FORWARD iifname "eth0" oifname "lxdbr0" ct state related,established accept

The final nft list ruleset on my host showed the rules above in the ip nat and ip filter tables (screenshots omitted).

P.S. The network interface on my host is actually named "eno1", so substitute your own interface name for "eth0" above.
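Also note that rules added with iptables or nft on the command line do not survive a reboot; on Ubuntu you can persist the iptables rules with the iptables-persistent package and sudo netfilter-persistent save.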
