[ovs-discuss] UDP datagram size effect at flow table hitting

ByoungUk Lee nimdrak at gmail.com
Thu Sep 12 02:21:33 UTC 2019


I appreciate your comment. Yes. You are exactly right.

After following your advice, I found that flow rule 2 is indeed only
rarely hit when using a large datagram.

As you said, it is probably because only the first fragment of the
large datagram carries the UDP header.

But is there a way to keep using big datagrams while still having the
flow table matched properly?

For my experiment, I need to use big datagrams. Otherwise, CPU
utilization goes to 100%.
(I think the many socket writes and buffer copies caused by small
datagrams are what drive the CPU utilization up.)
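A minimal sketch of why small datagrams cost more CPU: each datagram is one sendto() syscall, so moving the same bytes in MTU-sized pieces multiplies the syscall count. The loopback address, port, and sizes below are illustrative assumptions, not values from this thread.

```python
import socket

TOTAL = 63 * 1024            # total bytes to transfer (hypothetical workload)
SMALL = 1472                 # UDP payload that fits one 1500-byte MTU frame
ADDR = ("127.0.0.1", 50000)  # loopback stand-in for the 10.0.0.2 receiver

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# One large datagram: a single sendto(); the kernel fragments it at the
# IP layer, so the per-packet cost is paid in the kernel, not in syscalls.
large_calls = 1
sock.sendto(b"x" * TOTAL, ADDR)

# The same bytes as MTU-sized datagrams: one syscall per datagram.
small_calls = -(-TOTAL // SMALL)   # ceiling division
for _ in range(small_calls):
    sock.sendto(b"x" * SMALL, ADDR)

print(large_calls, small_calls)    # 1 vs 44 syscalls for the same payload
```

This only counts user-to-kernel transitions; the kernel still builds one IP packet per fragment either way, but the syscall and socket-buffer overhead per datagram is what the 100% CPU observation above points at.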

Could you give me a little advice?

Sincerely,
ByoungUk Lee




On Thu, Sep 12, 2019 at 1:42 AM, Justin Pettit <jpettit at ovn.org> wrote:
>
>
> > On Sep 11, 2019, at 7:04 AM, 병욱이 <nimdrak at gmail.com> wrote:
> >
> > I did a small experiment with ONOS 3.0.5, OVS 2.0.2(OF 1.0), mininet 2.3.0d5
> >
> > I found that when a flow rule matches on an L4 port, flow table
> > matching doesn't work properly.
> >
> > For example, for a UDP flow with ip_src=10.0.0.3, ip_dst=10.0.0.2, udp_dst=50000:
> >
> >       1)  cookie=0x4c0000ef7faa8a, duration=332.717s, table=0, n_packets=8974,
> >        n_bytes=557090858, idle_age=153, priority=65050,ip,nw_dst=10.0.0.2
> >        actions=output:4
> >
> >        2) cookie=0x4c0000951b3b33, duration=332.636s, table=0, n_packets=10,
> >        n_bytes=460,idle_age=168,priority=65111,udp,nw_src=10.0.0.3,nw_dst=10.0.0.2,
> >        tp_dst=50000 actions=output:3
> >
> > Although flow rule 2 has higher priority and more match fields,
> > flow rule 1 was hit.
> >
> > While troubleshooting, I found that the UDP datagram size affects the result.
> >
> > For a 63 kB datagram, flow rule 1 is hit.
> >
> > However, for a 1500-byte datagram, flow rule 2 is hit.
> >
> > I think the datagram size degrades OVS matching, but I don't know exactly why.
>
> 63KB is pretty big for a datagram.  Is it getting fragmented?  If so, only the first fragment contains the UDP information.
>
> --Justin
>
>
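Justin's point about fragmentation can be sketched numerically: with an assumed 1500-byte MTU, a 63 kB UDP datagram is split into IPv4 fragments, and only the fragment at offset 0 contains the 8-byte UDP header with tp_src/tp_dst. Every later fragment carries only raw payload, so a rule matching tp_dst=50000 cannot match it and the traffic falls through to rule 1. The constants below are assumptions for illustration.

```python
# Sketch of IPv4 fragmentation of one large UDP datagram (assumed 1500 MTU).
MTU = 1500
IP_HDR = 20                          # IPv4 header without options
UDP_HDR = 8
payload = 63 * 1024                  # the ~63 kB datagram from the experiment

ip_payload = UDP_HDR + payload       # UDP header + data carried inside IP
per_frag = (MTU - IP_HDR) // 8 * 8   # non-final fragment data: multiple of 8

offsets = list(range(0, ip_payload, per_frag))
print(len(offsets))                  # number of on-wire fragments

# Only the offset-0 fragment contains the UDP header, so only that single
# packet exposes L4 ports for a tp_dst match; the other fragments can only
# match on L3 fields like nw_dst, i.e. flow rule 1.
first_frag_has_ports = offsets[0] == 0 and per_frag >= UDP_HDR
later_frags_have_ports = False
```

So with large datagrams, n_packets on rule 2 grows by at most one per datagram while the remaining fragments all hit rule 1, which matches the counters shown above.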

