
Version 19 (modified by msherman, 5 years ago)

SB9

SB9 is dedicated to wired networking and cloud computing experimentation. Nodes in Sandbox 9 do not have wireless interfaces.

Sandbox 9 consists of 8 servers, each with the following specifications:

  • Server model: Dell R740XD
  • CPU: 2x Intel Xeon Gold 6126
  • Memory: 192 GB
  • Disk: 256 GB SSD
  • Network: 2x 25 GbE Mellanox ConnectX-4 Lx
  • Network: 2x 100 GbE Mellanox ConnectX-5 (pending; both ports share 1x PCIe 3.0 x16 slot)
  • Network: 1x 10 GbE (pending)
  • Accelerator 1: NVIDIA V100, 16 GB
  • Accelerator 2: Xilinx Alveo U200 FPGA (2x 100 GbE Ethernet onboard)

The servers are connected to a P4-capable switch, an Edgecore Wedge100BF-32X, with the following features:

  • Ports: 32x 100 GbE ports; each can be broken out into 2x 50 GbE, 2x 40 GbE, 4x 25 GbE, or 4x 10 GbE ports.
  • OS: SONiC (Microsoft's open networking OS, used in Azure) or Open Network Linux (ONL).
  • Management: in-band, out-of-band, or via the onboard BMC.
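As an illustration of the port breakout feature, recent SONiC builds with dynamic port breakout can split a 100 GbE port into 4x 25 GbE lanes from the switch shell. This is only a sketch: the port name, the supported mode strings, and the availability of the `config interface breakout` command all depend on the SONiC release installed on the Wedge100BF-32X.

```shell
# Sketch: split port Ethernet0 into 4x 25 GbE lanes using SONiC
# dynamic port breakout. Port names and supported mode strings
# vary by platform and SONiC release.
sudo config interface breakout Ethernet0 "4x25G" -y

# Persist the change so it survives a reboot.
sudo config save -y

# Verify the resulting lane configuration.
show interfaces status
```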

Procedures

Manual install of a new switch OS:
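As a hedged sketch of this procedure: ONIE-based switches such as the Wedge100BF-32X are typically reinstalled by booting the switch into ONIE install mode (reachable via the console or BMC) and fetching a NOS image over the network with `onie-nos-install`. The image server URL below is a placeholder, not an actual ORBIT server.

```shell
# From the ONIE install-mode prompt on the switch console:
# fetch and install a NOS image over HTTP. SONiC's build for
# Barefoot-based switches is named sonic-barefoot.bin; replace
# the placeholder URL with the location of your image server.
onie-nos-install http://<image-server>/sonic-barefoot.bin

# Alternatively, ONIE can locate an installer automatically via
# DHCP/HTTP if the DHCP server advertises an image location
# (ONIE "image discovery").
```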

Old

All hardware below this line has been retired. The information is preserved for reference.

SB9 is dedicated to OpenFlow experimentation (NOTE: Nodes in SB9 don't have wireless interfaces!). As shown in Figure 1, Sandbox 9 is built around an OpenFlow-capable switch with 12 nodes: 7 equipped with NetFPGA cards, 2 with NetFPGA-10G cards, 2 general-purpose ORBIT nodes, and a sandbox console. In addition, there is a fast cache machine with a 10 GbE connection and an SSD storage array fast enough to saturate that link.


Figure 1: SB9

The switch labeled 'sw-sb-09', a Pronto 3290, provides the central connectivity backplane for the 'DATA' interfaces of all hosts/NetFPGAs in the sandbox. Each host (node1-1..node1-12) is connected to the sw-sb-09 switch through one 1 GbE data connection on interface 'eth0' or 'exp0'. As with the rest of the ORBIT nodes, a second GbE interface ('eth1' or 'control') of each node, used exclusively for experiment control (including ssh/telnet sessions), is connected to an external control switch outside the sandbox.

The first 7 hosts (node1-1..node1-7) each contain a 4x 1 GbE NetFPGA installed in a PCI slot. Each NetFPGA has 4 connections to the top switch, corresponding to its four 1 GbE ports nf2c0-nf2c3.

Nodes node1-8 and node1-9 each contain a 4x 10 GbE NetFPGA installed in a PCI Express slot. Two of each card's four ports are connected to the top switch, while the remaining two directly connect the two nodes to each other.

Node1-12 has eth4 connected to the Pronto switch at 10 GbE, with eth5 currently unconnected. It also has 2 unconnected 10GBASE-T ports, enumerated as eth2 and eth3.

The Pronto switch is an OpenFlow-enabled switch and can run in native or OpenFlow mode, selected by its boot configuration. In native mode it runs the Pica8 XorPlus switch software, while in OpenFlow mode it can run either the stock Indigo firmware or an experimenter-provided OpenFlow image for Pronto switches. The mode of operation is controlled by the network aggregate manager service.
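When the switch is in OpenFlow mode, one quick way to confirm it is reachable is to query it with the standard `ovs-ofctl` utility from a host on the control network. This is a sketch only: it assumes the firmware is configured to accept passive TCP connections on port 6634 (a common convention, not confirmed for this deployment), and the hostname is illustrative.

```shell
# Query the switch's OpenFlow datapath over a passive TCP connection.
# 'sw-sb09' and port 6634 are assumptions -- substitute the actual
# management address and listener port configured on the Pronto.
ovs-ofctl -O OpenFlow10 show tcp:sw-sb09:6634

# Dump the currently installed flow table.
ovs-ofctl -O OpenFlow10 dump-flows tcp:sw-sb09:6634
```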

Switchport #   Node         Device     Interface #
1              --           --         --
2              1-1          NetFPGA    Port4
3              1-1          NetFPGA    Port3
4              1-1          NetFPGA    Port2
5              1-1          Card       Data
6              --           --         --
7              1-2          NetFPGA    Port4
8              1-2          NetFPGA    Port3
9              --           --         --
10             1-2          Card       Data
11             --           --         --
12             1-3          NetFPGA    Port4
13             1-3          NetFPGA    Port3
14             1-3          NetFPGA    Port2
15             1-3          Card       Data
16             --           --         --
17             1-4          NetFPGA    Port4
18             1-4          NetFPGA    Port3
19             1-4          NetFPGA    Port2
20             1-4          Card       Data
21             --           --         --
22             1-5          NetFPGA    Port4
23             --           --         --
24             1-5          NetFPGA    Port2
25             1-5          Card       Data
26             --           --         --
27             1-6          NetFPGA    Port3
28             1-6          NetFPGA    Port1
29             1-6          NetFPGA    Port2
30             1-6          Card       Data
31             --           --         --
32             1-7          NetFPGA    Port4
33             1-7          NetFPGA    Port3
34             1-7          NetFPGA    Port1
35             1-7          Card       Data
36             1-11         Onboard    Data
37             --           --         --
38             --           --         --
39             1-10         Onboard    Data
40             1-8          Card       Data
41             --           --         --
42             --           --         --
43             --           --         --
44             --           --         --
45             1-9          Card       Data
46             --           --         --
47             --           --         --
48             --           --         --
te-1/1/49      1-12         Card       Data
te-1/1/50      ?            NetFPGA    ?
te-1/1/51      ?            NetFPGA    ?
te-1/1/52      ?            NetFPGA    ?
Eth1           Management   SW-SB-02   44
Eth2           Management   --         --
Aux            --           --         --
Con            Management   S-ORBW     Serial1

Attachments (1)

  • SB9 BD.png (85.2 KB ) - added by seskar 4 years ago. Sandbox 9 Block Diagram

