Infineta Systems

Contents

  1. Company

  2. Products

  3. See also

  4. References

  5. External links

{{Infobox company
|logo = Infineta logo.png
|logo_size = 180px
|name = Infineta Systems
|type = Private
|foundation = California 2008
|fate = Shut down; intellectual property sold to Riverbed Technology, Inc.
|location_city = Santa Clara, California
|industry = Networking hardware
}}

Infineta Systems was a company that made WAN optimization products for high-performance, latency-sensitive network applications. The company advertised that its products allowed the application data rate to exceed the nominal data rate of the link. Infineta ceased operations by February 2013; a liquidator was appointed, and its products are no longer manufactured, sold, or distributed.

Riverbed Technology purchased some of Infineta's assets from the liquidator.[1]

Company

Infineta was founded in 2008 by Raj Kanaya, the CEO, and K.V.S. Ramarao, the CTO. Ramarao concluded that the computational resources, especially I/O operations and CPU cycles, associated with data compression technologies would ultimately limit their scalability.[2] He and Kanaya founded Infineta to develop algorithms and hardware that would avoid these bottlenecks. The company had six patents pending.

Infineta was headquartered in San Jose, California and attracted $30 million in two rounds of venture funding from Alloy Ventures, North Bridge Venture Partners, and Rembrandt Venture Partners.[3][4]

Products

Infineta announced its Data Mobility Switch in June 2011. The DMS was the first WAN optimization technology to work at throughput rates of 10 Gbit/s.[5] Infineta designed the product in FPGA hardware around a multi-Gigabit switch fabric to minimize latency.

The DMS used compression similar to data deduplication.
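Infineta's actual algorithm was proprietary, but the general idea behind deduplication-style compression can be sketched as follows: split the stream into chunks, fingerprint each chunk, and replace repeated chunks with short references to the first occurrence. The chunk size, hash choice, and encoding here are illustrative assumptions, not the DMS design.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4096):
    """Illustrative fixed-size-chunk deduplication.

    Each chunk is fingerprinted with SHA-256; a chunk seen before is
    replaced by ("ref", index) pointing at its first occurrence.
    """
    seen = {}     # fingerprint -> index of first occurrence in `stream`
    stream = []   # output: raw bytes for new chunks, ("ref", i) for repeats
    for offset in range(0, len(data), chunk_size):
        chunk = data[offset:offset + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen:
            stream.append(("ref", seen[digest]))   # duplicate: send a reference
        else:
            seen[digest] = len(stream)
            stream.append(chunk)                   # new data: send verbatim
    return stream

# Two repeated "A" chunks collapse to references; only unique data is sent.
payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
encoded = deduplicate(payload)
```

In this example the second and fourth 4 KB chunks are duplicates of the first, so the encoded stream carries two raw chunks and two small references instead of 16 KB of data.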

The product was designed to address the long-standing issue of TCP performance[6] on long fat networks, so that even unreduced data can achieve throughput equivalent to the WAN bandwidth. To illustrate, take the example of transferring a 2.5 GByte (20 billion bit) file from New York to Chicago (15 ms latency, 30 ms round-trip time) over a 1 Gbit/s link. With standard TCP, which uses a 64 KB window size, the transfer would take about 19 minutes. At the link's theoretical maximum throughput of 1 Gbit/s, it would take about 20 seconds. The DMS performs the transfer in 19.5 to 21 seconds.[7]
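The arithmetic behind this example can be checked with a short sketch. TCP sends at most one window per round trip, so throughput is bounded by window size divided by RTT; a window sized to the bandwidth-delay product is needed to fill the link (the constants below are the ones from the example above):

```python
def tcp_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_seconds

RTT = 0.030                  # New York - Chicago round-trip time, seconds
FILE_BITS = 20_000_000_000   # 2.5 GByte file

# Standard TCP without window scaling: 64 KB receive window
plain = tcp_throughput_bps(65_536, RTT)       # ~17.5 Mbit/s
plain_seconds = FILE_BITS / plain             # ~1,144 s, about 19 minutes

# A window sized to the bandwidth-delay product fills the 1 Gbit/s link
bdp_bytes = int(1e9 * RTT / 8)                # 3.75 MB
filled = tcp_throughput_bps(bdp_bytes, RTT)   # 1 Gbit/s
filled_seconds = FILE_BITS / filled           # 20 s
```

This reproduces the figures in the text: roughly 19 minutes with a default 64 KB window, versus 20 seconds when the window covers the full bandwidth-delay product.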

See also

  • Data migration
  • WAN optimization
  • Network latency
  • Network congestion

References

1. ^{{cite web |title= Technology Migration - Contact Sales Infineta |work= Web page |publisher= Riverbed Technology |url= http://www.riverbed.com/how-to-buy/technology-migration/Contact-Sales-Infineta.html |accessdate= June 27, 2013 }}
2. ^{{cite journal|last=Martynov|first=Maxim|title=Challenges for High-Speed Protocol-Independent Redundancy Eliminating Systems |journal= Proceedings of 18th International Conference on Computer Communications and Networks, ICCCN 2009. |date=11 September 2009|page=6|doi=10.1109/ICCCN.2009.5235389|url=http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5235389|issn=1095-2055}}
3. ^{{cite web|url=http://www.siliconvalleywire.com/svw/2011/06/san-jose-based-infineta-systems-raises-15-million-in-second-round.html |title=San Jose-Based Infineta Systems Raises $15 Million in Second Round |publisher=Silicon Valley Wire |date=2011-06-06 |accessdate=2011-07-29}}
4. ^{{cite web|url=http://gigaom.com/cloud/infineta-raises-15m-to-move-big-data-across-data-centers/ |title=Infineta raises $15M to move big data across data centers — Cloud Computing News |publisher=Gigaom.com |date=2011-06-06 |accessdate=2011-07-29}}
5. ^{{cite web|last=Rath|first=John|title=Infineta Ships 10Gbps Data Mobility Switch|url=http://www.datacenterknowledge.com/archives/2011/06/07/infineta-ships-10gbps-data-mobility-switch/|accessdate=June 7, 2011|date=2011-06-07}}
6. ^{{cite web|last=Jacobson|first=Van|title=TCP Extensions for High Performance|url=http://www.ietf.org/rfc/rfc1323.txt|work=Network Working Group V. Jacobson Request for Comments: 1323|publisher=ietf.org}}
7. ^Throughput can be calculated as Throughput = RWIN / RTT, where RWIN is the TCP receive window and RTT is the round-trip time to and from the target. The default TCP window size in the absence of window scaling is 65,536 bytes, or 524,288 bits. So for this example, Throughput = 524,288 bits / 0.03 seconds = 17,476,267 bits/second, or about 17.5 Mbit/s. Dividing the bits to be transferred by the rate of transfer gives 20,000,000,000 bits / 17,476,267 bits/second ≈ 1,144 seconds, or about 19 minutes.

External links

  • {{cite web |title= Infineta Systems—WAN Optimization for Big Traffic |work= Company web site |url= http://www.infineta.com/ |deadurl=yes |archiveurl= https://web.archive.org/web/20121223052330/http://www.infineta.com/ |archivedate= December 23, 2012 |accessdate= June 27, 2013 }}

Categories: WAN optimization | Computer storage companies | Computer companies of the United States | Defunct networking companies
