Parsytec

{{Infobox dot-com company
| name = Parsytec
| logo =
| company_type = Public
| foundation = 1985
| location_city = Aachen, NRW
| location_country = Germany
| area_served = North America, South America, Europe, Asia Pacific
| founder = Falk-Dietrich Kübler, Gerhard Peise, Bernd Wolf
| services = Surface inspection systems
| url = http://www.parsytec.de
| language = German
}}

ISRA VISION PARSYTEC AG is a subsidiary of ISRA VISION AG and was founded in 1985 as Parsytec (PARallel SYstem TEChnology) in Aachen, Germany.

Parsytec became known in the late 1980s and early 1990s as a manufacturer of transputer-based parallel systems. Its products ranged from a single transputer plug-in board for the IBM PC up to large massively parallel systems with thousands of transputers or processors, such as the Parsytec GC. Some sources describe the latter as ultracomputer-sized, scalable multicomputers (smC).[1][2]

As part of ISRA VISION AG, the company today focuses on solutions in the machine vision and industrial image processing sector. The ISRA Parsytec products are used for quality and surface inspection, especially in the metal and paper industries.

History

In 1985, Parsytec was founded by Falk-Dietrich Kübler, Gerhard H. Peise, and Bernd Wolff in Aachen, Germany, with an 800,000 DM grant from the Federal Ministry for Research and Technology (BMFT).[3]

In contrast to SUPRENUM, Parsytec aimed its systems (pattern recognition) directly at industrial applications such as surface inspection. It therefore not only held a substantial market share in European academia but also won many industrial customers, including many outside Germany. In 1988, exports accounted for roughly a third of Parsytec's turnover.

Turnover figures were nil in 1985, 1.5M DM in 1986, 5.2M DM in 1988, 9M DM in 1989, 15M DM in 1990, and 17M USD in 1991.

To let Parsytec focus on research and development, ParaCom was founded to handle the sales and marketing side of the business.

Parsytec/ParaCom's headquarters remained in Aachen (Germany), with subsidiary sales offices in Chemnitz (Germany), Southampton (United Kingdom), Chicago (USA), St Petersburg (Russia) and Moscow (Russia).[4] In Japan, the machines were sold by Matsushita.[3]

Between 1988 and 1994, Parsytec built an impressive range of transputer-based computers, culminating in the "Parsytec GC" (GigaCluster), which was available in versions with 64 up to 16384 transputers.[5]

Parsytec had its IPO in mid-1999 on the German Stock Exchange in Frankfurt.

On 30 April 2006, founder Falk-D. Kübler left the company.[6]

In July 2007,[7] 52.6%[8] of Parsytec AG was acquired by ISRA VISION AG. The delisting of Parsytec shares from the stock market began in December of the same year, and since 18 April 2008 the Parsytec share has no longer been listed on the stock exchange.[9]

While Parsytec's workforce was around 130 in the early 1990s, the ISRA VISION Group had more than 500 employees in 2012/2013.[10]

Today, the core business of ISRA Parsytec within the ISRA VISION Group is the development and distribution of surface inspection systems for strip products in the metal and paper industries.

Products/Computers

Parsytec's product range included:

  • Megaframe (T414/T800) --- one transputer per board, up to ten boards in a rack or as plug-in boards
  • MultiCluster (T800) --- up to 64 processors in a single rack
  • SuperCluster (T800) --- 16 to 1024 processors in a single frame
  • GigaCluster (planned: T9000; realized: T800 or MPC 601) --- 64 to 16384 processors in "cubes"
  • x'plorer (T800 or MPC 601)
  • Cognitive Computer (MPC 604 and Intel Pentium Pro)
  • Powermouse (MPC 604)

In total, some 700 stand-alone systems (SC and GC) were shipped.

Initially, Parsytec participated in the GPMIMD (General Purpose MIMD)[11] project under the umbrella of the ESPRIT[12] programme, both funded by the European Commission's Directorate for Science.

However, after substantial disagreements with the other participants (Meiko, Parsys, Inmos and Telmat) over the choice of a common physical architecture, Parsytec left the project and announced a T9000-based machine of its own, the GC. Due to Inmos' problems with the T9000, however, it was forced to switch to an ensemble of Motorola MPC 601 CPUs and Inmos T805 transputers. This led to Parsytec's "hybrid" systems (e.g. the GC/PP), which demoted the transputers to communication processors while the compute work was offloaded to the PowerPCs.

Parsytec's cluster systems were operated from an external workstation, typically a Sun workstation (e.g. a Sun-4[13]).

There is substantial confusion regarding the names of the Parsytec products.

On the one hand, this has to do with the architecture; on the other, it has to do with the aforementioned non-availability of the Inmos T9000, which forced Parsytec to use the T805 and the PowerPC instead. Systems equipped with PowerPC processors carried the prefix "Power".

As regards the architecture of GC systems, an entire GigaCluster was made up of self-contained GigaCubes.

The basic architectural element of a Parsytec system was a cluster, which consisted, inter alia, of four transputers/processors (i.e. a cluster was a node in the classical sense).

A GigaCube (sometimes referred to as a supernode or meganode[14]) consisted of four clusters (nodes), each with 16 Inmos T805 transputers (30 MHz), RAM (up to 4 MB per T805), a further redundant T805 (an additional, thus 17th, processor), the local link connections and four Inmos C004 routing chips. Hardware fault tolerance was provided by linking each T805 to a different C004.[15]
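
The figures above can be made concrete in a short sketch. The following minimal C program models one GigaCube using only the numbers given in this section (four clusters, 16 worker T805s plus one redundant T805 per cluster, four C004 routing chips per cluster); the round-robin assignment of transputers to C004s is an illustrative assumption, not Parsytec's documented wiring.

    #include <stdio.h>

    #define CLUSTERS_PER_CUBE      4
    #define WORKERS_PER_CLUSTER   16
    #define REDUNDANT_PER_CLUSTER  1
    #define C004_PER_CLUSTER       4

    int main(void) {
        int workers = CLUSTERS_PER_CUBE * WORKERS_PER_CLUSTER;
        int total = CLUSTERS_PER_CUBE * (WORKERS_PER_CLUSTER + REDUNDANT_PER_CLUSTER);
        printf("GigaCube: %d worker T805s (%d including redundancy)\n",
               workers, total);

        /* Spread one cluster's workers across its four C004s so that a
           failing router affects as few transputers as possible
           (round-robin here is an assumption for illustration). */
        for (int t = 0; t < WORKERS_PER_CLUSTER; t++)
            printf("T805 #%02d -> C004 #%d\n", t, t % C004_PER_CLUSTER);
        return 0;
    }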

The unusual spelling of x'plorer led to variants such as xPlorer, and the GigaCluster is sometimes referred to as the GigaCube or Grand Challenge.

Megaframe

Megaframe[16][17] was the product name of a family of transputer-based parallel processing modules,[18] some of which could be used to upgrade an IBM PC.[19]

As a standalone system, a Megaframe could hold up to ten processor modules. Different versions of the modules were available, for example one with a 32-bit T414 transputer, Motorola 68881 floating-point hardware, 1 MB of RAM (80 ns access time) and a throughput of 10 MIPS, or one with four 16-bit T22x transputers and 64 kB of RAM.

Cards for special functions were also offered, such as a graphics processor with a resolution of 1280 × 1024 pixels, or an I/O "cluster" with terminal and SCSI interfaces.[20]

Multicluster

The MultiCluster-1 series comprised statically configurable systems that could be tailored to specific user requirements such as number of processors, amount of memory, I/O configuration and system topology. The required processor topology was configured using UniLink connections fed through a special backplane. In addition, four external sockets were provided.

MultiCluster-2 used network configuration units (NCUs) that provided flexible, dynamically configurable interconnection networks. The multi-user environment could support up to eight users by means of Parsytec's multiple virtual architecture software.

The NCU design was based on the Inmos C004 crossbar switch, which provided full crossbar connectivity for up to 16 transputers. Each NCU, made of C004s, connected up to 96 UniLinks linking internal as well as external transputers and other I/O subsystems.

MultiCluster-2 provided the ability to configure a variety of fixed interconnection topologies such as tree or mesh structures.[14]
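
Since the C004 is central both to MultiCluster-2 and to the later GC, a toy model may help. The C sketch below represents a single 16-link crossbar as a routing table with a checked connect operation; the function names are illustrative stand-ins, not the real NCU configuration interface.

    #include <stdio.h>

    #define PORTS 16

    static int route[PORTS];        /* route[in] = out, -1 = unconnected */

    /* Connect an input link to an output link; fail if the output is taken. */
    static int crossbar_connect(int in, int out) {
        if (in < 0 || in >= PORTS || out < 0 || out >= PORTS) return -1;
        for (int i = 0; i < PORTS; i++)
            if (i != in && route[i] == out) return -1;
        route[in] = out;
        return 0;
    }

    int main(void) {
        for (int i = 0; i < PORTS; i++) route[i] = -1;
        /* Example: a statically configured ring, the kind of fixed
           topology a MultiCluster could be set up with. */
        for (int i = 0; i < PORTS; i++)
            crossbar_connect(i, (i + 1) % PORTS);
        for (int i = 0; i < PORTS; i++)
            printf("link %2d -> link %2d\n", i, route[i]);
        return 0;
    }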

SuperCluster

SuperCluster [https://www.flickr.com/photos/53343802@N00/301038080/ (picture)] had a hierarchical, cluster-based design. The basic unit was a fully connected cluster of 16 T800 transputers; larger systems had additional levels of NCUs to form the necessary connections. The Network Configuration Manager (NCM) software controlled the NCUs and dynamically established the required connections. Each transputer could be equipped with 1 to 32 MB of dynamic RAM with single-error correction and double-error detection.[14]

GigaCluster

The GigaCluster (GC) was a parallel computer produced in the early 1990s. A GigaCluster was made up of GigaCubes.[24]

Designed for the Inmos T9000 transputer, it could never be launched as such, since the T9000 itself never made it to market in good time.

This led to the development of the GC/PP (PowerPlus), in which two Motorola MPC 601 CPUs (80 MHz) served as the dedicated compute processors, supported by four T805 transputers (30 MHz).[21]

While the GC/PP was a hybrid system, the GCel ("entry level") was based on the T805 only.[22][23] The GCel was supposed to be upgradeable to the T9000 transputers (had they arrived early enough), thus becoming a full GC. As the T9000 was Inmos' evolutionary successor to the T800, upgrading was planned to be simple and straightforward because, firstly, both transputers shared the same instruction set and, secondly, they had quite a similar ratio of compute power to communication throughput. A theoretical speed-up factor of 10 was therefore expected,[24] but in the end it was never reached.

The network structure of the GC was a two-dimensional lattice with an inter-communication speed between the nodes (i.e. clusters in Parsytec's lingo) of 20 Mbit/s.
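
A brief C sketch of such a lattice, assuming a simple row-major numbering of the clusters (the numbering scheme is an assumption for illustration; the text only specifies the 2D lattice and the 20 Mbit/s links):

    #include <stdio.h>

    #define ROWS 4
    #define COLS 4   /* e.g. the 16 clusters of a four-cube GC-2 */

    /* Print the lattice neighbours of one cluster. */
    static void neighbours(int node) {
        int r = node / COLS, c = node % COLS;
        printf("cluster %2d:", node);
        if (r > 0)        printf(" north=%2d", node - COLS);
        if (r < ROWS - 1) printf(" south=%2d", node + COLS);
        if (c > 0)        printf(" west=%2d",  node - 1);
        if (c < COLS - 1) printf(" east=%2d",  node + 1);
        printf("   (20 Mbit/s per link)\n");
    }

    int main(void) {
        for (int n = 0; n < ROWS * COLS; n++)
            neighbours(n);
        return 0;
    }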

For the time, the concept of the GC was exceptionally modular and thus scalable.

A so-called GigaCube was a module that already constituted a one-gigaflop system and at the same time served as the building block for larger systems.

A module (i.e. cube in Parsytec's lingo) contained

  • four clusters

of which each was equipped with

  • 16 transputers (plus a further transputer for redundancy, thus making it 17 transputers per cluster),
  • 4 wormhole routing chips (C104 for the planned T9000 and C004 with the realized T805),
  • a dedicated power supply and communications ports.

By combining modules (or cubes, respectively), one could theoretically connect up to 16384 processors into one very powerful system.

Typical installations were:

System   Number of CPUs   Number of GigaCubes
GC-1                 64                     1
GC-2                256                     4
GC-3               1024                    16
GC-4               4096                    64
GC-5              16384                   256
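
The table follows a factor-of-four pattern (each GC-n comprising 4^(n-1) GigaCubes of 64 processors), which a few lines of C reproduce:

    #include <stdio.h>

    int main(void) {
        long cubes = 1;                   /* GC-1 is a single GigaCube */
        for (int n = 1; n <= 5; n++) {
            printf("GC-%d: %5ld CPUs in %3ld GigaCubes\n",
                   n, cubes * 64, cubes);
            cubes *= 4;                   /* each step quadruples the system */
        }
        return 0;
    }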

The two largest installations of the GC that were actually shipped had 1024 processors each (16 modules, with 64 transputers per module) and were operated at the data centers of the Universities of Cologne and Paderborn.

In October 2004, the latter was given to the Heinz Nixdorf MuseumsForum,[25] where it is now inoperable.

The power consumption of a system with 1024 processors was approximately 27 kW; the weight was almost a ton. In 1992, the system cost about 1.5M DM. While the smaller versions up to the GC-3 were air-cooled, water cooling was mandatory for the larger systems.

In 1992, a GC with 1024 processors reached a placement in the TOP500 list[26] of the world's fastest supercomputer installations; within Germany alone, it was number 22 among the fastest computers.

In 1995, there were nine Parsytec computers in the TOP500 list, of which two GC/PP 192 installations ranked 117th and 188th.[27] In 1996, they still ranked 230th and 231st.[28][29]

x'plorer

The x'plorer model came in two versions: the initial version featured 16 transputers, each with access to 4 MB of RAM, and was called simply x'plorer. When Parsytec later switched generally to the PPC architecture, the model was called POWERx'plorer and featured 8 MPC 601 CPUs. Both models came in the same distinctive desktop case (designed by Via 4 Design[30]).

In either version, the x'plorer was essentially a single "slice", which Parsytec called a cluster (picture), of a GigaCube (PPC or transputer), the smallest version of which (GC-1) used 4 of those clusters. Thus, some call it a "GC-0.25".[31]

The POWERx'plorer was based on 8 processing units arranged in a 2D mesh. Each processing unit had

  1. one 80 MHz MPC 601 processor,
  2. 8 MB of local memory and
  3. a transputer for establishing and maintaining communication links.[32]

Cognitive Computer

The Parsytec CC (Cognitive Computer) (picture) system[33][34] was an autonomous unit at the card rack level.

The CC card-rack subsystem provided the system with its infrastructure, including power supply and cooling. The system could be configured as a standard 19-inch rack-mountable unit which accepted various 6U plug-in modules.

The CC system[35] was a distributed-memory, message-passing parallel computer and is globally classified into the MIMD category of parallel computers.

There were two different versions available:

  • CCe: based on the Motorola MPC 604 processor running at 133 MHz with 512 KB of L2 cache. The modules were connected at 1 Gbit/s using high-speed (HS) link technology according to the IEEE 1355 standard, allowing data transfer at up to 75 MB/s. The communication controller was integrated into the processor nodes through the PCI bus. The system board used the MPC 105 chip to provide memory control, DRAM refresh and memory decoding for banks of DRAM and/or Flash. The CPU bus speed was limited to 66 MHz, while the PCI bus speed was 33 MHz at maximum.
  • CCi: based on the Intel Pentium Pro, its core elements were dual Pentium Pro motherboards (at 266 MHz) interconnected by several high-speed networks. Each dual motherboard had 128 MB of memory, and each node had a peak performance of 200 MFLOPS. The product spectrum comprised single-processor and SMP boards up to a 144-node system, a large variety of PCI cards, and different communication solutions (Gigabit HS-Link, Myrinet, SCI, ATM or Fast Ethernet). The operating systems were Windows NT 4.0 and ParsyFRame (a UNIX environment was optional).[36]

In all CC systems, the nodes were directly connected to the same router, which implemented an active hardware 8×8 crossbar switch for up to 8 connections using the 40 MB/s high-speed link.

As for the CCe, the software was based on IBM's AIX 4.1 UNIX operating system together with Parsytec's parallel programming environment Embedded PARIX (EPX).[37]

It thus combined a standard UNIX environment (compilers, tools, libraries) with an advanced parallel software development environment. The system was integrated into the local area network using standard Ethernet. A CC node had a peak performance of 266 MFlops; the peak performance of the 8-node CC system installed at Geneva University Hospital was therefore 2.1 GFlops.[38]
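
The peak figures are consistent if one assumes the MPC 604 completes two floating-point operations per cycle; this two-flops-per-cycle figure is an assumption inferred from the quoted numbers, not stated in the text. A one-line check in C:

    #include <stdio.h>

    int main(void) {
        double node_mflops = 133.0 * 2.0;        /* 133 MHz x 2 flops/cycle = 266 MFlops */
        double system_gflops = 8 * node_mflops / 1000.0;  /* 8 nodes ~ 2.1 GFlops */
        printf("node: %.0f MFlops, 8-node system: %.3f GFlops\n",
               node_mflops, system_gflops);
        return 0;
    }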

Powermouse

Powermouse was another scalable system consisting of modules and individual components. It was a straightforward extension of the x'plorer system.[36] Each module (dimensions: 9 cm × 21 cm × 45 cm) contained four MPC 604 processors (200/300 MHz) and 64 MB of RAM, attaining a peak performance of 2.4 Gflop/s.

A separate communication processor (T425), equipped with 4 MB of RAM,[39] controlled the data flow in four directions to other modules in the system. The bandwidth of a single node was 9 MB/s.

For about 35,000 DM, a basic system consisting of 16 CPUs (i.e. four modules) could provide a total computing power of 9.6 Gflop/s. As with all Parsytec products, Powermouse required a Sun SPARCstation as the front end.

All software (PARIX with C++ and Fortran 77 compilers and debuggers, alternatively providing MPI or PVM as user interfaces) was included.[40]

Operating system

The operating system used was PARIX (PARallel UnIX extensions)[41] (PARIXT8 for the T80x transputers and PARIXT9 for the T9000 transputers, respectively). Based on UNIX, PARIX[42] supported remote procedure calls and was compliant with the POSIX standard.

PARIX provided UNIX functionality at the front end (e.g. a Sun SPARCstation, which had to be purchased separately), with library extensions for the needs of the parallel system at the back end, which was the Parsytec product itself, connected to and operated from the front end. The PARIX software package comprised components for the program development environment (compilers, tools, etc.) and the runtime environment (libraries). PARIX offered different types of synchronous and asynchronous communication.

In addition, Parsytec provided a parallel programming environment called Embedded PARIX (EPX).[37]

To develop parallel applications using EPX, data streams and function tasks were allocated to a network of nodes. The data handling between processors required just a few system calls.

Standard routines for synchronous communication such as send and receive were available, as were asynchronous system calls. Together, the full set of EPX calls established the EPX application programming interface (API). The destination of any message transfer was defined through a virtual channel that ended at a user-defined process. Virtual channels were user-defined and managed by EPX.

The actual message delivery system software utilised the router.[38]
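
A minimal sketch of what an EPX-style exchange over a virtual channel might look like in C. The epx_* names are hypothetical stand-ins (the real API is documented in the cited EPX manual[37]), and the router is stubbed with a local loopback buffer so the sketch is self-contained and runnable.

    #include <stdio.h>
    #include <string.h>

    typedef struct { int peer; } Channel;   /* a virtual channel to a peer process */

    static unsigned char loopback[256];     /* stand-in for the router's delivery */
    static size_t loopback_len;

    static Channel epx_open_channel(int peer) { Channel c = { peer }; return c; }

    /* Synchronous send: here simply hands the message to the "router" stub. */
    static void epx_send(Channel ch, const void *buf, size_t len) {
        (void)ch;
        if (len > sizeof loopback) len = sizeof loopback;
        memcpy(loopback, buf, len);
        loopback_len = len;
    }

    /* Synchronous receive: here simply reads back the loopback buffer. */
    static void epx_recv(Channel ch, void *buf, size_t len) {
        (void)ch;
        if (len > loopback_len) len = loopback_len;
        memcpy(buf, loopback, len);
    }

    int main(void) {
        Channel ch = epx_open_channel(1);   /* ends at a user-defined process */
        double out[4] = { 1.0, 2.0, 3.0, 4.0 }, in[4];
        epx_send(ch, out, sizeof out);
        epx_recv(ch, in, sizeof in);
        printf("received %g %g %g %g\n", in[0], in[1], in[2], in[3]);
        return 0;
    }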

Moreover, one could also run COSY (Concurrent Operating SYstem)[43] and Helios on the machines. Helios supported Parsytec's special reset mechanism out of the box.

See also

  • INMOS
  • SUPRENUM
  • Meiko Scientific
  • Thinking Machines Corporation

References

1. ^Massively Parallel Computers: Why Not Parallel Computers for the Masses? G. Bell at microsoft.com
2. ^Alternating-Direction Line-Relaxation Methods on Multicomputers J. Hofhaus et al., SIAM J. Sci. Comput. Vol. 17, No. 2, pp. 454-478, March 1996
3. ^Duell der Zahlenfresser at zeit.de (German)
4. ^Parsytec GmbH at new-npac.org
5. ^Parsytec Article at GeekDot.com
6. ^Annual Statement of Accounts 2006 at parsytec.de
7. ^ISRA Vision übernimmt Parsytec Jul 23, 2007 at finanznachrichten.de (German)
8. ^ISRA VISION AG - Erwerb der Mehrheit an der Parsytec AG Jul 24, 2007 at equinet-ag.de (German)
9. ^Investor Relations at ISRA at parsytec.de
10. ^Annual Report 2012/2013 May 05, 2014 at isravision.com
11. ^General-Purpose MIMD Machines at cordis.europa.eu
12. ^European programme (EEC) for research and development in information technologies (ESPRIT), 1984-1988
13. ^[https://research.microsoft.com/en-us/people/efp/pap95thesis.pdf A Framework for Characterising Parallel Systems for Performance Evaluation] Efstathios Papaefstathiou (1995)
14. ^[https://web.archive.org/web/20130408131211/http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA285782. ESN Information Bulletin 92-08] at dtic.mil
15. ^Hypercube Solutions for Conjugate Directions, J. E. Hartman (1991) at dtic.mil
16. ^MEGAFRAME TPM-1 - Hardware Documentation Ver. 1.2 (1987) at classiccmp.org
17. ^MEGAFRAME MTM-2 - Hardware Documentation Ver. 1.3 (1987) at classiccmp.org
18. ^MEGAFRAME Familie {{webarchive|url=https://archive.is/20130210175256/http://www.computerwoche.de/heftarchiv/1987/22/1159921/ |date=2013-02-10 }} May 1987 at computerwoche.de (German)
19. ^Ram Meenakshisundaram's Transputer Home Page at classiccmp.org
20. ^Transputersystem ist frei konfigurierbar {{webarchive|url=https://web.archive.org/web/20080402050400/http://www.computerwoche.de/heftarchiv/1987/11/1158717/ |date=2008-04-02 }} Mar 1987 at computerwoche.de (German)
21. ^The Parsytec Power Plus at netlib.org
22. ^Programmierung und Anwendung der Parallelrechnersysteme Parsytec SC und Parsytec GC/PP{{dead link|date=March 2018 |bot=InternetArchiveBot |fix-attempted=yes }} B. Heiming, 1996, Technical University Hamburg-Harburg (German)
23. ^Synthesizing massive parallel simulation systems to estimate switching activity in finite state machines{{dead link|date=March 2018 |bot=InternetArchiveBot |fix-attempted=yes }} W. Bachmann et al., Darmstadt University of Technology
24. ^Gigacube Article at GeekDot.com
25. ^Homepage of the Heinz Nixdorf Museum Forum
26. ^TOP500 List at top500.org
27. ^[https://web.archive.org/web/20101225071300/http://top500.org/static/lists/xml/TOP500_199506_all.xml Top500 List 1995]
28. ^Lecture Notes on Applied Parallel Computing {{webarchive|url=https://web.archive.org/web/20100816142309/http://ocw.mit.edu/courses/mathematics/18-337j-applied-parallel-computing-sma-5505-spring-2005/lecture-notes/chapter_6.pdf |date=2010-08-16 }} at ocw.mit.edu
29. ^Viel hilft viel: Die neuen Supercomputer haben Billigprozessoren wie der PC nebenan - aber zu Tausenden at zeit.de (German)
30. ^iF Online Exhibition - Via 4 Design at ifdesign.de
31. ^x'plorer Article at GeekDot.com
32. ^Experimental Study on Time and Space Sharing on the PowerXplorer S. Bani-Ahmad, Ubiquitous Computing and Communication Journal Vol. 3 No. 1
33. ^Parsytec CC Series (Hardware.Documentation), Rev. 1.2 (1996) Parsytec GmbH
34. ^The Parsytec CC series at netlib.org
35. ^Target Machines {{webarchive|url=https://web.archive.org/web/20040812211926/http://parallel.di.uoa.gr/BENCHMARKS/MACHINES/index.html |date=2004-08-12 }} at http://parallel.di.uoa.gr
36. ^Parallel Computing Hardware at ssd.sscc.ru
37. ^[https://web.archive.org/web/20030509124439/http://www.csa.ru/CSA/tutor/parsa/epx.ps Embedded Parix Ver. 1.9.2, Software Documentation (1996)]
38. ^Implementation of an Environment for Monte Carlo simulation of Fully 3-D Positron Tomography on a High-Performance Parallel Platform H. Zaidi, Parallel Computing, Vol. 24 (1998), pp. 1523-1536
39. ^System Parsytec Power Mouse in CSA {{webarchive|url=https://archive.is/20130416172819/http://www.csa.ru/education/lib/Parsytec/Parsytec/powermouse.html |date=2013-04-16 }} Dec 15, 1998 at csa.ru
40. ^Parsytec liefert Baukasten für Parallelrechner {{webarchive|url=https://archive.is/20130210223202/http://www.computerwoche.de/heftarchiv/1997/38/1101338/ |date=2013-02-10 }} Sep 19, 1997 at computerwoche.de (German)
41. ^[https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxwYXJzeXRlY3RyYW5zcHV0ZXJ8Z3g6NmY4YTYxZGI0OGRlNmNjNQ PARIX Release 1.2 Software Documentation] March 1993
42. ^Parix at http://www.informatik.uni-osnabrueck.de
43. ^COSY – ein Betriebssystem für hochparallele Computer R. Butenuth at uni-paderborn.de (German)

External links

  • Homepage of ISRA VISION PARSYTEC AG
  • Ram Meenakshisundaram's Transputer Home Page at classiccmp.org
  • 16384 Prozessoren bringen 400 Gflops Transputer-Superrechner von Parsytec als neuer Weltmeister{{dead link|date=March 2018 |bot=InternetArchiveBot |fix-attempted=yes }} Article at computerwoche.de (German)
  • [https://archive.is/20130210230827/http://www.computerwoche.de/heftarchiv/1993/40/1130412/ Zur Strategie von Parsytec Kuebler: "In zehn Jahren rechnen die meisten Computer parallel"] Oct 1, 1993, at computerwoche.de (German)
  • The FTMPS-Project: Design and Implementation of Fault-tolerance Techniques for Massively Parallel Systems{{dead link|date=March 2018 |bot=InternetArchiveBot |fix-attempted=yes }} J. Vounckx et al.
  • [https://web.archive.org/web/20131212002306/http://www.via4.com/html_version/english/ Homepage of Via 4 Design]
{{DEFAULTSORT:Parsytec}}

[[Category:Supercomputers]]
[[Category:Massively parallel computers]]
[[Category:Parallel computing]]
