[ovs-dev] [mcgroup 2/4] datapath: Hardcode vport multicast group ID on older kernels.

Ethan Jackson ethan at nicira.com
Fri Sep 16 00:37:05 UTC 2011


> I'm going to consider this patch reviewed.

With one minor adjustment (copied into gmail):

diff --git a/include/openvswitch/automake.mk b/include/openvswitch/automake.mk
index b7c6723..24a6826 100644
--- a/include/openvswitch/automake.mk
+++ b/include/openvswitch/automake.mk
@@ -1,5 +1,6 @@
 noinst_HEADERS += \
        include/openvswitch/brcompat-netlink.h \
+       include/openvswitch/datapath-compat.h \
        include/openvswitch/datapath-protocol.h \
        include/openvswitch/tunnel.h \
        include/openvswitch/types.h

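For reference, here is a rough sketch of the sort of thing the new
datapath-compat.h header is meant to carry.  This is illustrative only
(the macro name is made up, not necessarily the one in the patch); it
just shows the hardcoded fallback group ID the thread below settles on:

    /*
     * Older kernels (e.g. RHEL 5) cannot hand out genetlink multicast
     * group IDs dynamically, so the datapath falls back to a fixed ID.
     * Groups 16-31 and 32 are already claimed by the compat code, so 33
     * is the next free fixed ID.
     */
    #ifndef DATAPATH_COMPAT_H
    #define DATAPATH_COMPAT_H 1

    #define OVS_VPORT_MCGROUP_FALLBACK_ID 33

    #endif /* datapath-compat.h */
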

> Ethan
>
> On Thu, Sep 15, 2011 at 17:11, Jesse Gross <jesse at nicira.com> wrote:
>> I don't really think this is all that important.  Neither moving the
>> range that we allocate from nor making it discontinuous is that hard.
>> Like I said before, I don't really care that much.  33 is fine.
>>
>> On Thu, Sep 15, 2011 at 5:06 PM, Ben Pfaff <blp at nicira.com> wrote:
>>> I can't argue with that.  We could add a way to query it, I guess, if
>>> really necessary.
>>>
>>> On Thu, Sep 15, 2011 at 05:04:16PM -0700, Jesse Gross wrote:
>>>> I meant moving the group of fallback IDs would break things.
>>>>
>>>> On Thu, Sep 15, 2011 at 5:01 PM, Ben Pfaff <blp at nicira.com> wrote:
>>>> > It wouldn't break the ABI to move either pool around, because those
>>>> > aren't hardcoded in userspace, only in the kernel.  A discontinuous
>>>> > range would also work but wouldn't be necessary.
>>>> >
>>>> > On Thu, Sep 15, 2011 at 04:55:56PM -0700, Jesse Gross wrote:
>>>> >> I guess the other thing is if we want to increase our pool of
>>>> >> preallocated multicast groups, we have to either break the ABI or make
>>>> >> the current pool discontinuous.
>>>> >>
>>>> >> On Thu, Sep 15, 2011 at 4:48 PM, Ben Pfaff <blp at nicira.com> wrote:
>>>> >> > Personally I'd suggest 33 for this one and increment for each
>>>> >> > succeeding family.  No one's ever mentioned a problem with our use of
>>>> >> > genetlink groups.  Since RHEL5 is probably declining rather than
>>>> >> > increasing in deployment, my guess is that no one ever will.
>>>> >> >
>>>> >> > On Thu, Sep 15, 2011 at 04:44:53PM -0700, Jesse Gross wrote:
>>>> >> >> Not really, I don't have any particular opinion on the actual number.
>>>> >> >> The only thing that I was concerned about is what it would look like
>>>> >> >> if we want to do this with the multicast groups for other families.
>>>> >> >>
>>>> >> >> On Thu, Sep 15, 2011 at 4:40 PM, Ethan Jackson <ethan at nicira.com> wrote:
>>>> >> >> > Based on my offline discussions with Jesse I arrived, rather
>>>> >> >> > arbitrarily, at the number 214.  I don't know enough about the kernel
>>>> >> >> > to judge what a good number choice would be.  Jesse seemed to think
>>>> >> >> > larger was better.  I'll use whatever the two of you think is best.
>>>> >> >> >
>>>> >> >> > Ethan
>>>> >> >> >
>>>> >> >> >
>>>> >> >> > On Thu, Sep 15, 2011 at 16:31, Ben Pfaff <blp at nicira.com> wrote:
>>>> >> >> >> On Thu, Sep 15, 2011 at 04:10:55PM -0700, Ethan Jackson wrote:
>>>> >> >> >>> > Where does the number 214 come from?
>>>> >> >> >>>
>>>> >> >> >>> Experimentally I found that the number had to be fairly small.  I
>>>> >> >> >>> wanted it to be large enough to be unlikely to conflict with values
>>>> >> >> >>> allocated the proper way.  I also wanted an arbitrary number to avoid
>>>> >> >> >>> conflicting with other people who may be improperly hardcoding values
>>>> >> >> >>> like this.
>>>> >> >> >>
>>>> >> >> >> We already use genetlink groups 16 through 31 (see
>>>> >> >> >> datapath/linux/compat/genetlink-openvswitch.c) and group 32 (see
>>>> >> >> >> datapath/linux/compat/genetlink-brcompat.c).  I don't think it makes
>>>> >> >> >> sense to skip all the way to 214.  Even in 2.6.37 I only see a total
>>>> >> >> >> of 11 defined genetlink multicast groups, so I doubt that anyone's
>>>> >> >> >> going to backport a bunch of them to RHEL 5.
>>>> >> >> >>
>>>> >> >> >
>>>> >> >
>>>> >
>>>
>>
>

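To summarize the numbers being thrown around above (a sketch, not the
actual compat code; these names are hypothetical):

    /* datapath/linux/compat/genetlink-openvswitch.c reserves fixed group
     * IDs 16 through 31 for kernels that cannot allocate them. */
    #define COMPAT_MCGROUP_FIRST 16
    #define COMPAT_MCGROUP_LAST  31

    /* datapath/linux/compat/genetlink-brcompat.c takes the next fixed ID. */
    #define BRCOMPAT_MCGROUP     32

    /* This patch hardcodes the vport multicast group right after it, and
     * each succeeding family would simply increment from here (34, 35,
     * ...).  Growing the 16-31 pool later would mean either moving the
     * fixed assignments or making the range discontinuous, which is the
     * tradeoff discussed above. */
    #define VPORT_MCGROUP        33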