  1. #21
    Untangle Ninja YeOldeStonecat's Avatar
    Join Date
    Aug 2007
    Posts
    1,565

    Default

    Quote Originally Posted by hescominsoon View Post
    you're being a drama queen. Most of us here have real time... years of experience with real hardware NICs (Intel/Broadcom) and soft NICs (Realtek, Linksys, D-Link, Marvell)... they simply don't hold up with UT or any other business environment that's driving any real traffic (10 megabits or higher). In lesser environments, yeah, they'll work... for a while. I've had issues with soft NICs in my Astaro installations, and Astaro isn't nearly as memory heavy as the UVM in UT is. I go hardware NICs or go home now. If somebody wants to use soft NICs... I'll warn them... then gladly charge them full price for the consultation fee AND the replacement hardware NIC once the soft NIC barfs.
    But he has 7 posts... clearly he's had much more experience than those of us who have been using and supporting UT out in the production world since its early years.

  2. #22
    Untangle Ninja YeOldeStonecat's Avatar
    Join Date
    Aug 2007
    Posts
    1,565

    Default

    Quote Originally Posted by ixeous View Post
    I personally don't remember a significant number of people saying that x86 servers can't compete with big iron for mission-critical tasks. What does the server market look like today?
    And the relevance of the above statement to the topic of this thread is _______?

  3. #23
    Untangle Ninja sky-knight's Avatar
    Join Date
    Apr 2008
    Location
    Phoenix, AZ
    Posts
    26,554

    Default

    And there is one last critical flaw with that specific argument...

    There is a massive economic reason to continue testing those less expensive interfaces. The day they become stable is the day I launch a new, less expensive server series to capitalize on it. Such a cost reduction is a legitimate competitive edge for those of us who earn a living selling servers to operate Untangle.
    Rob Sandling, BS:SWE, MCP
    NexgenAppliances.com
    Phone: 866-794-8879 x201
    Email: support@nexgenappliances.com

  4. #24
    Untangle Ninja
    Join Date
    Jan 2009
    Posts
    1,186

    Default

    n/a
    Last edited by fasttech; 05-18-2011 at 11:21 AM.

  5. #25
    Untangle Junkie dmorris's Avatar
    Join Date
    Nov 2006
    Location
    San Carlos, CA
    Posts
    17,486

    Default

    Easy, guys. Let's not form a lynch mob.

    ixeous is reporting his personal findings (and the process by which one can replicate his findings).
    Data is good.



    As an aside, this does reflect my personal experience. The "mystery" around NICs on this forum has gone on forever, with all sorts of wild theories floating around about why people have different experiences. It used to be that everyone hated 3Com 3c905B's. Of course, we always quietly laughed at this, because half of our developer machines used them and they worked great. Now everyone hates the Realteks. In my experience, they haven't been an issue.

    I'm not saying there haven't been performance differences observed by people.
    I'm saying I've seen these performance differences attributed to the most bizarre things - and that isn't necessarily good.
    1) As I said before, there is no such thing as a "soft NIC" and a "hard NIC." These aren't RAID controllers. All NICs are a combination of hardware and software.
    2) The amount of memory used by a NIC device is TINY (compared to the amount of memory on an Untangle server). Furthermore, the amount of memory used by a NIC device is not correlated with what is done at the application layer - there are many layers of abstraction between the two.

    Again, I'm not saying there aren't important differences in performance between NICs. I'm saying we must be very careful about what we attribute these performance differences to. If you don't know, just say you observed differences in your setup. If you really want to take it to the next level, provide a replicable test that others can reproduce (which is exactly what ixeous did in this thread).
    Attention: Support and help on the Untangle Forums is provided by volunteers and community members like yourself.
    If you need Untangle support please call or email support@untangle.com

  6. #26
    Untanglit ixeous's Avatar
    Join Date
    Sep 2007
    Posts
    24

    Default More numbers

    I'm breaking this next set of testing up over multiple entries just so no one entry is excessively long.

    Using the same hardware from the first post in this thread, I used Untangle as the gateway instead of Vyatta. During these tests, Attack Blocker was turned off, and tests were run without any gateway for reference.

    Untangle Version: 8.1.1~svn20110217r28510release8.1-1lenny (32-bit)

    Test: iperf -c server -t 300 -P 5

    No UT
    [SUM] 0.0-300.0 sec 32.6 GBytes 934 Mbits/sec

    UT bypass
    RealTek: [SUM] 0.0-300.0 sec 18.8 GBytes 538 Mbits/sec
    Intel: [SUM] 0.0-300.0 sec 19.2 GBytes 549 Mbits/sec

    UT no bypass
    RealTek: [SUM] 0.0-300.0 sec 8.41 GBytes 241 Mbits/sec
    Intel: [SUM] 0.0-300.0 sec 9.20 GBytes 263 Mbits/sec

    Test: iperf -c server -t 300 -P 5 -p 80

    No UT
    [SUM] 0.0-300.0 sec 32.6 GBytes 935 Mbits/sec

    UT bypass
    RealTek: [SUM] 0.0-300.0 sec 18.8 GBytes 540 Mbits/sec
    Intel: [SUM] 0.0-300.0 sec 19.2 GBytes 549 Mbits/sec

    UT no bypass
    RealTek: [SUM] 0.0-300.1 sec 4.31 GBytes 123 Mbits/sec
    Intel: [SUM] 0.0-300.0 sec 4.59 GBytes 131 Mbits/sec

    Again, there is a consistent difference between the throughput of Intel and Realtek, but it is relatively small.
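
    For a rough sense of scale, a quick calculation of the Intel-vs-Realtek gap from the figures above (the run labels are just shorthand for the runs in this post):

    # relative Intel-vs-Realtek throughput gap, from the iperf sums above (Mbits/sec)
    runs = {
        "bypass, default port": (538, 549),
        "no bypass, default port": (241, 263),
        "bypass, -p 80": (540, 549),
        "no bypass, -p 80": (123, 131),
    }
    for name, (realtek_mbps, intel_mbps) in runs.items():
        gap_pct = (intel_mbps - realtek_mbps) / realtek_mbps * 100
        print(f"{name}: Intel ahead by {gap_pct:.1f}%")   # roughly 2-9% across the runs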
    Last edited by ixeous; 05-18-2011 at 12:38 PM.

  7. #27
    Untanglit ixeous's Avatar
    Join Date
    Sep 2007
    Posts
    24

    Default Packets per second

    As has been pointed out in this thread, iperf does not necessarily correlate very well to real-world use. The fundamental reason for this (which nobody has even alluded to) is that routers/gateways do not process bandwidth (throughput); they process packets. Throughput is a function of packets per second multiplied by the (average) size of those packets. iperf maxes out the packet size to get raw throughput. In a real network with varying application protocols, packet sizes vary greatly; therefore, the number of packets the system can handle per unit of time is also of interest.
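
    To make that relationship concrete, here is a minimal sketch of the arithmetic (the packet sizes are illustrative, not measured):

    # throughput ~= packets per second x average packet size x 8 bits
    pps = 77_750                       # packets per second (illustrative)
    for avg_pkt_bytes in (1500, 500):  # near-ideal size vs a more mixed average
        throughput_mbps = pps * avg_pkt_bytes * 8 / 1_000_000
        print(f"{avg_pkt_bytes} bytes -> {throughput_mbps:.0f} Mb/s")  # ~933 vs ~311 Mb/s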

    The methodology is essentially the same as method 1 described at http://blog.famzah.net/2009/11/24/be...etwork-device/. I did, however, limit the ping count to 100000 for all runs and used the time command for the calculations. Obviously, a ping would not trigger many of the Untangle modules, so we would expect the bypass and non-bypass numbers to be consistent.

    time ping -q -s 1 -c 100000 -f server

    No UT
    100000 packets transmitted, 100000 packets received, 0% packet loss

    real 0m16.489s
    user 0m0.805s
    sys 0m4.372s


    UT bypass
    RealTek:
    100000 packets transmitted, 100000 packets received, 0% packet loss

    real 0m29.525s
    user 0m0.898s
    sys 0m4.950s

    Intel:
    100000 packets transmitted, 100000 packets received, 0% packet loss

    real 0m29.966s
    user 0m0.943s
    sys 0m5.193s


    UT no bypass
    RealTek:
    100000 packets transmitted, 100000 packets received, 0% packet loss

    real 0m29.466s
    user 0m0.912s
    sys 0m5.077s

    Intel:
    100000 packets transmitted, 100000 packets received, 0% packet loss

    real 0m29.488s
    user 0m0.902s
    sys 0m4.970s


    Summary:

    No UT: 12129.3 pps

    UT bypass:
    Realtek: 6773.92 pps
    Intel: 6674.23 pps

    UT no bypass:
    Realtek: 6787.48 pps
    Intel: 6782.42 pps
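
    For reference, these figures appear to come from dividing the total packet count in both directions by the wall-clock time; a minimal sketch of that calculation, assuming that convention:

    # pps from a flood-ping run: transmitted + received packets over elapsed "real" time
    sent = received = 100000
    real_seconds = 16.489              # "real" value reported by time for the No UT run
    pps = (sent + received) / real_seconds
    print(round(pps, 1))               # 12129.3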


    As we can see, Untangle's packets-per-second performance is not really affected by the choice of NIC.
    Last edited by ixeous; 05-18-2011 at 12:51 PM.

  8. #28
    Untangle Ninja
    Join Date
    Jan 2009
    Posts
    1,186

    Default

    n/a

  9. #29
    Untanglit ixeous's Avatar
    Join Date
    Sep 2007
    Posts
    24

    Default Packets Per Second Revisited

    When I examined the data from the first attempt at measuring packets per second, it simply didn't make sense. I knew from iperf what my data throughput was at near-ideal load, so with a little math I should have had a reasonable idea of packets per second. The measured numbers were way off. It turns out the methodology used was not very good.

    The second method will use the interface statistics on a 3Com 2928-SFP switch. The Untangle/client is connected to a single port. The statistics will be reset to 0 for each run of the iperf tests. After each run completes, we will have the iperf results, plus the reported packet statistics on the switch port for that run.

    The data (the final number on each line is the packet count reported by the switch port):

    iperf -c server -t 300 -P 5

    No UT
    [SUM] 0.0-300.0 sec 32.6 GBytes 933 Mbits/sec 25783918

    UT bypass
    RealTek: [SUM] 0.0-300.0 sec 19.1 GBytes 548 Mbits/sec 15555831
    Intel: [SUM] 0.0-300.0 sec 19.2 GBytes 550 Mbits/sec 15482790

    UT no bypass
    RealTek: [SUM] 0.0-300.0 sec 8.51 GBytes 244 Mbits/sec 6694121
    Intel: [SUM] 0.0-300.0 sec 8.92 GBytes 255 Mbits/sec 7016655

    iperf -c server -t 300 -P 5 -p 80

    UT bypass
    RealTek: [SUM] 0.0-300.0 sec 19.0 GBytes 544 Mbits/sec 15445192
    Intel: [SUM] 0.0-300.0 sec 19.1 GBytes 546 Mbits/sec 15325464

    UT no bypass
    RealTek: [SUM] 0.0-300.0 sec 4.40 GBytes 126 Mbits/sec 3461951
    Intel: [SUM] 0.0-300.0 sec 4.52 GBytes 129 Mbits/sec 3552710


    Checking the numbers

    Using the No UT datapoint as a check:

    [SUM] 0.0-300.0 sec 32.6 GBytes 933 Mbits/sec 25783918

    933 Mb/s was pushed in 25783918 packets over 300 s

    933 Mb/s * 1000000 b/Mb (network calculation is not 2^20) = 933,000,000 b/s

    933,000,000 b/s / 8 (b/B) = 116,625,000 B/s

    116,625,000 B/s / 1500 (bytes in a full-size Ethernet payload) = 77750 p/s (expected value)

    25783918 packets / 300 s = 85946.39 p/s (measured value)

    The numbers are reasonably close. If we use 1357 instead of 1500 for the average Ethernet frame size, we get an expected value of 85943.26.
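
    The same check, expressed in code (a minimal sketch of the arithmetic above):

    # sanity check for the No UT run: expected vs measured packets per second
    throughput_bps = 933_000_000         # 933 Mb/s reported by iperf
    measured_packets = 25783918          # packet count from the switch port
    duration_s = 300

    expected_pps = throughput_bps / 8 / 1500        # assuming full 1500-byte packets
    measured_pps = measured_packets / duration_s
    implied_avg_bytes = throughput_bps / 8 / measured_pps

    print(round(expected_pps, 2))        # 77750.0
    print(round(measured_pps, 2))        # 85946.39
    print(round(implied_avg_bytes))      # ~1357 bytes per packet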


    The measured packets per second for the tested module stack are:

    iperf -c server -t 300 -P 5

    No UT
    85946.39

    UT bypass
    RealTek: 51852.77
    Intel: 51609.3

    UT no bypass
    RealTek: 22313.74
    Intel: 23388.85

    iperf -c server -t 300 -P 5 -p 80

    UT bypass
    RealTek: 51483.97
    Intel: 51084.88

    UT no bypass
    RealTek: 11539.75
    Intel: 11842.37


    Summary:

    There is not a significant difference between the Realtek and Intel NICs in packets per second. The data seems consistent.
    Last edited by ixeous; 05-19-2011 at 08:56 AM.

  10. #30
    Untanglit ixeous's Avatar
    Join Date
    Sep 2007
    Posts
    24

    Default Results

    Through the various tests, there was no significant performance difference between the Realtek and Intel NICs. Furthermore, one can easily use the results to determine an expected performance for the tested module stack. We know that iperf uses a close-to-ideal packet size, whereas a production network does not. We can, however, use average packet sizes along with the measured packet throughput to determine an expected average throughput.

    Example:

    Using the Realtek data with no bypass and the -p 80 option, the measured packets per second was 11539.75. From that we can create the chart below:

    Avg Pkt Size (bytes)    Expected Throughput (Mb/s)
    1357                    125.28
    1300                    120.01
    1200                    110.78
    1100                    101.55
    1000                     92.32
    900                      83.09
    800                      73.85
    700                      64.62
    600                      55.39
    500                      46.16
    400                      36.93
    300                      27.70
    200                      18.46
    100                       9.23
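
    For anyone who wants to rerun the arithmetic behind that chart, a minimal sketch (the pps value is the measured figure quoted above):

    # expected throughput at various average packet sizes for a fixed packets-per-second rate
    pps = 11539.75   # Realtek, no bypass, -p 80
    for size_bytes in (1357, 1300, 1200, 1100, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100):
        mbps = pps * size_bytes * 8 / 1_000_000
        print(f"{size_bytes:>5}  {mbps:7.2f} Mb/s")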
