Die wunderbare Welt auf Schicht 2




Transcript:

ALQ-30-AUG-10 Version 0.004 Die wunderbare Welt auf Schicht 2 GUUG Hamburg Dipl.-Ing. A. la Quiante TSA, Public Sector Operation, Germany alaquian@cisco.com # 7817

ALQ-30-AUG-10 Agenda (TOI := Transfer of Information): explanation of the technology (abbreviations and terms); basics, prerequisites and motivation; Unified I/O and Unified Fabric; DCE, CEE and DCB and the standards; quo vadis Layer 2: VSS, vPC, FabricPath and TRILL. Approx. 60 to 90 minutes.

ALQ-26-NOV-09 Motivation I (diagram labels): MAC, WWN, firmware, server, application, storage, connection, fat client. Status quo (in many cases): decentralized systems, 100M/1G NIC + 2x HBA.

ALQ-26-NOV-09 Motivation II: NIC 1 for LAN (VLAN 42 := production), NIC 2 for LAN (VLAN 42 := production), HBA 1 for SAN (VSAN 33 := production), HBA 2 for SAN (VSAN 33 := production), plus management access. VLAN := virtual LAN, VSAN := virtual SAN. That is a physical server; the next step is virtualization.

ALQ-26-NOV-09 Motivation III: PCI-X and PCIe slots filled with 100 Mb and 1 Gb adapters. NIC 1 for LAN (VLAN 42 := production), NIC 2 for LAN (VLAN 42 := production), HBA 1 for SAN (VSAN 33 := production), HBA 2 for SAN (VSAN 33 := production), management access, NIC 3 and 4 for the VMkernel, NIC 5 and 6 for the service console, NIC 7, ... for LAN (VLAN 43 ... n). At a consolidation ratio of 8:1 and formerly 550 Mbit/s per server we need 8 x 550 Mbit/s = 4.4 Gbit/s for the VM uplinks (production). One vmnic per VM?

ALQ-19-OCT-09 The Case for 10GE to the Server Multi-Core CPU architectures allowing bigger and multiple workloads on the same machine Server virtualization driving the need for more I/O bandwidth per server Growing need for network storage driving the demand for higher network bandwidth to the server 10GE LAN on server Motherboards (LoM) beginning mid-2008 (source: Broadcom )

ALQ-26-NOV-09 10/40/100 Gigabit Ethernet: forecast from spring 2007.

ALQ-28-FEB-10 Marketing and standards: DCE := Data Center Ethernet, CEE := Converged Enhanced Ethernet, DCB := Data Center Bridging.

ALQ-30-AUG-10 DCB, Data Center Bridging: features/standards and their benefit.
Priority Flow Control (PFC), IEEE 802.1Qbb: enables multiple traffic types to share a common Ethernet link without interfering with each other.
Bandwidth Management, IEEE 802.1Qaz: enables consistent management of QoS at the network level by providing consistent scheduling.
Congestion Management, IEEE 802.1Qau: end-to-end congestion management for the L2 network.
Data Center Bridging Exchange Protocol (DCBX): management protocol for the enhanced Ethernet capabilities.
L2 multipath for unicast and multicast: increased bandwidth, multiple active paths, no spanning tree (TRILL at the IETF, L2MP at Cisco).
FCoE: T11 FC-BB-5, standard since 03-JUN-09; IEEE 802.1Qbh covers the related VN-Tag based port extension.

Adapter: NIC, HBA, CNA, VIC

ALQ-26-NOV-09 Unified I/O: the CNA. CNA := Converged Network Adapter, i.e. CNA := NIC + HBA.
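
Not part of the original slides: a minimal NX-OS sketch of how an access switch could present one converged 10GE port of a CNA as both an Ethernet interface and a virtual Fibre Channel interface. VSAN 33 echoes the production VSAN used earlier; the FCoE VLAN and interface numbers are placeholders.

! Assumption: Nexus 5000 with FCoE licensed; all numbers are illustrative.
feature fcoe                          ! enable FCoE on the switch
vlan 100
  fcoe vsan 33                        ! map FCoE VLAN 100 to VSAN 33 (production)
interface ethernet 1/5                ! port the CNA is attached to
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  spanning-tree port type edge trunk
interface vfc 5                       ! virtual FC interface for the same CNA
  bind interface ethernet 1/5
  no shutdown
vsan database
  vsan 33 interface vfc 5             ! place the vfc into the production VSAN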

ALQ-03-FEB-10 Unified I/O and Unified Fabric: the VIC M81KR.

ALQ-30-NOV-09 Cisco virtualized adapter. Today's server: a hypervisor soft switch in front of the virtual machines. With the Cisco VIC the hypervisor soft switch becomes optional. A true wire-once architecture, highly dynamic; network policy and visibility are brought to the VMs; hypervisor bypass support increases performance.

Unified I/O

ALQ-26-NOV-09 Strategy, a thought experiment: worldwide shipments of roughly 100,000 FC ports versus 100,000,000 Ethernet ports. FCoE, NAS and iSCSI versus Fibre Channel: which technology will offer the better price/performance ratio in the future?

Who is who: the T11 group. Fibre Channel standards, part of INCITS; has defined FC technologies for over 10 years (the FCIA markets them); focuses on all things FC: physical layer, switching, framing, security. Why it is important to FCoE: it standardizes the method of transporting Fibre Channel frames over Ethernet and the method for multi-hop FCoE.

Who is who: the IEEE 802.1 working group. LAN bridging standards, part of IEEE 802 (the LAN and MAN committee); defines LAN bridging technologies, i.e. everything about Ethernet switching. The Data Center Bridging (DCB) task group sits inside IEEE 802.1; DCB developed bridging extensions relevant for the data center environment. Why it is important to FCoE: those bridging extensions enable I/O consolidation with FCoE over Ethernet.

When are standards done? The four phases of standards development: 1. Investigation, 2. Development, 3. Approval, 4. Publication. A standard is technically stable, a.k.a. "done", when it moves from the Development to the Approval phase.

Status of the standards: all standards for FCoE are technically stable. FC-BB-5: technically stable in October 2008, completed in June 2009, published in May 2010. PFC: completed in July 2010, awaiting publication. ETS: completed in July 2010 (completing the approval phase). DCBX: completed in July 2010 (completing the approval phase).

ALQ-30-AUG-10 Priority Flow Control (PFC), IEEE 802.1Qbb

ALQ-26-NOV-09 Priority Flow Control (PFC): Ethernet and FC converge onto Ethernet plus a CNA, the Unified Fabric. The 10GE link is treated as eight lanes (priorities), e.g. FCoE on one lane and regular Ethernet on the others, and PFC can pause each lane individually. DCB := Data Center Bridging, FCoE := Fibre Channel over Ethernet.
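
A hedged sketch, not from the slides, of how the lossless treatment behind PFC is typically expressed on a Nexus 5000 through a network-qos policy. On these switches class-fcoe and its no-drop behaviour come largely preconfigured once feature fcoe is enabled, so this is illustrative rather than required; the policy name is made up.

! Assumption: Nexus 5000; class-fcoe is the predefined class matching CoS 3.
policy-map type network-qos nq-fcoe
  class type network-qos class-fcoe
    pause no-drop                     ! PFC: pause this priority instead of dropping
    mtu 2158                          ! room for the encapsulated FC frame
  class type network-qos class-default
    mtu 1500
system qos
  service-policy type network-qos nq-fcoe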

ALQ-30-AUG-10 Bandwidth Management, IEEE 802.1Qaz

Enhanced Transmission Selection (ETS), bandwidth management: once feature fcoe is configured, two classes are created by default, and each class is given 50% of the available bandwidth (in the example, FC HBAs versus Gigabit Ethernet NICs). Best practice: FCoE and Ethernet each receive 50%. This can be changed through QoS settings when certain traffic has higher demands (e.g. HPC traffic, or more Ethernet NICs).

ALQ-26-NOV-09 Unified I/O: Ethernet and FC on the Unified Fabric.
N5k(config)# policy-map type queuing policy-fcoe
N5k(config-pmap-que)# class type queuing class-default
N5k(config-pmap-c-que)# bandwidth percent 40
N5k(config-pmap-c-que)# class type queuing class-fcoe
N5k(config-pmap-c-que)# bandwidth percent 60
N5k(config-pmap-c-que)# system qos
N5k(config-sys-qos)# service-policy type queuing input policy-fcoe
The input queuing policy is applied either on the interface the CNA is attached to or globally under system qos.
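
As a usage note not on the slide (assuming a Nexus 5000 and an illustrative port number): the resulting per-class bandwidth allocation on a port can be checked with the following command.

N5k# show queuing interface ethernet 1/5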

ALQ-30-AUG-10 Data Center Bridging Exchange Protocol (DCBX)

ALQ-25-MAY-10 DCBX and the CNA: for maintenance purposes, LAN ON/OFF is signalled via DCBX.

ALQ-24-MAY-09 I/O consolidation (NPV or FC mode). Diagram: the SAN fabric switch offers an F port, NPIV and the name server; the NPV edge device connects to it with an NP port and offers F ports towards the servers; the servers (SRV1, ...) attach with their N ports. NPIV := N Port ID Virtualization (a fabric device feature), NPV := N Port Virtualizer (an edge device feature).
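
A minimal sketch, not from the slides, of the two features the slide names, assuming a Nexus 5000 as the edge device and an MDS or other FC switch as the core. Note that switching the edge into NPV mode is disruptive; the FC configuration is lost and the switch reloads, so check the release notes before trying this.

! Core / fabric switch: allow several FC IDs to log in behind one F port
feature npiv

! Edge switch (e.g. Nexus 5000): act as N Port Virtualizer instead of a full FC switch
feature npv                           ! disruptive: configuration is erased and the switch reloads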

ALQ-01-DEC-09 Unified Fabric (VE ports): LAN and SAN, 10G Ethernet and 4/8G Fibre Channel. The server attaches with a VN_Port to a VF_Port on a DCE/FCoE switch; DCE/FCoE switches interconnect via VE_Ports; FCoE storage attaches natively. The diagram also shows OTV and a FEX.

Layer 2

ALQ-14-OCT-09 Communication patterns: clusters, VMotion & co. once again require Layer 2 end-to-end, or possibly LISP. VSS on the Catalyst 6500; vPC on the Nexus 5000 & 7000; VMotion between fire compartment West and fire compartment East. Formerly a silo architecture with north-south communication, increasingly also east-west communication. TRILL (possibly 1H2011?) in the IETF working group; L2MP / FabricPath from Cisco now.

Constraints for scaling a Layer 2 network: port density on the switches, the over-subscription ratio, complex STP configuration, and MAC table size. Diagram: data center core, aggregation and access layers; a vPC domain with a primary peer (HSRP active, primary root) and a secondary peer (HSRP standby, secondary root); STP port types N = network port, E = edge port, - = normal port type, protected with BPDU guard (B), root guard (R) and loop guard (L); the Layer 3 / Layer 2 boundary and cross-chassis port channels towards the access layer.

Existing options for Layer 2 expansion: higher port density (more I/O slots, more ports per I/O module); multiple inter-switch links (but STP only allows a single active link between two devices); higher interface speed (use an interface whose speed equals the combination of multiple lower-speed links); port-channel/link aggregation (more ports in a bundle, up to 16 ports today). The drawbacks: wasted bandwidth, higher port cost, limited scale.
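
Not from the slide: a small NX-OS sketch of the last option, a link-aggregation bundle between two switches; interface and channel numbers are made up.

feature lacp                          ! LACP-based port channel
interface ethernet 1/1-2
  channel-group 10 mode active        ! bundle two physical links into one logical link
interface port-channel 10
  switchport mode trunk               ! STP sees only the single logical interface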

ALQ-19-OCT-09 VSS and vPC: VSS := Virtual Switching System, vPC := Virtual Port Channel (EtherChannel). NIC teaming variants: active-passive, TLB (transmit load balancing), or EtherChannel.

ALQ-19-OCT-09 VSS & vPC. Limitation: two and exactly two central systems. Diagram: 10G/40G/100G uplinks, access switches with port 1 ... port 24 and port 1 ... port 48. Further limitation: 8 or 16 members per EtherChannel...
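
A hedged sketch of the vPC side on a Nexus 5000/7000, not from the slides; the domain, IP addresses and port-channel numbers are placeholders, and the second peer needs the mirrored configuration with swapped keepalive addresses.

feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1   ! out-of-band keepalive between the peers
interface port-channel 1
  switchport mode trunk
  vpc peer-link                       ! link between the two vPC peer switches
interface port-channel 20
  switchport mode trunk
  vpc 20                              ! the downstream device sees one logical switch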

ALQ-19-OCT-09 Scaling option: FabricPath. Limitation: two to 16 central systems. Diagram: 10G/40G/100G, L2MP/TRILL in the core, access switches with port 1 ... port 24 and port 1 ... port 48...

ALQ-14-OCT-09 Maximally scalable Layer 2 environments. Diagram: spines SW1, SW2, SW3 ... SW16, connected via single links or EtherChannel to access switches with port 1 ... port 24 and port 1 ... port 48. The task: more than two central systems (spines), yet loop freedom on Layer 2 without Spanning Tree (STP).
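
A hedged configuration sketch, not from the slides, of how such a loop-free Layer 2 fabric is built with FabricPath (Nexus 7000 with F-series modules or Nexus 5500). VLAN, switch-id and interface numbers are illustrative; the switch-id can also be left to the automatic assignment described on the control plane slide below.

install feature-set fabricpath        ! Nexus 7000: install the feature set first
feature-set fabricpath
fabricpath switch-id 11               ! optional; otherwise assigned automatically
vlan 42
  mode fabricpath                     ! this VLAN is carried across the fabric
interface ethernet 1/1-4              ! links towards the spines SW1..SW4
  switchport mode fabricpath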

Data plane operation: the encapsulation creates a hierarchical address scheme; the FabricPath header is imposed by the ingress switch; the ingress and egress switch addresses are used to make the routing decision; no MAC learning is required inside the L2 fabric. Diagram: host A in STP domain 1 talks to host C in STP domain 2; the ingress switch S11 bridges the frame classically, prepends the FabricPath header (egress S42, ingress S11), the FabricPath domain routes on those switch addresses, and the egress switch S42 removes the header and bridges the frame to C.

Control plane operation, plug-and-play: L2 IS-IS is used to manage the forwarding topology; switch addresses are assigned to all FabricPath-enabled switches automatically (no user configuration required); shortest pair-wise paths are computed; equal-cost paths between any pair of FabricPath switches are supported. Diagram: spines S1-S4 and leaves S11, S12, S42 form the L2 fabric; the FabricPath routing table of S11 lists S1 via L1, S2 via L2, S3 via L3, S4 via L4, and S12 as well as S42 via the equal-cost set L1, L2, L3, L4.

Unicast with FabricPath: the forwarding decision is based on the FabricPath routing table; more than two active paths (up to 16) are supported across the fabric; bisectional bandwidth grows beyond what a port channel offers; high availability with N+1 path redundancy. Diagram: on S11 the MAC table maps local host A to port 1/1 and remote host C to switch S42, while the routing table offers L1, L2, L3, L4 as equal-cost links towards S42.
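
A hedged verification sketch, not from the slides: on FabricPath-capable NX-OS releases the routing table, the IS-IS adjacencies and the MAC table shown in the figure can be inspected with the following commands (output omitted).

S11# show fabricpath route
S11# show fabricpath isis adjacency
S11# show mac address-table dynamic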

Multicast with FabricPath: forwarding happens along distinct trees; several trees are rooted at key locations inside the fabric; all switches in the L2 fabric share the same view of each tree; multicast traffic is load-balanced across these trees (diagram: roots for trees #1 to #4). The FabricPath ingress switch decides which tree is used and adds the tree number to the header.

Use case: a Layer 2 Internet exchange point. IXP requirements: Layer 2 peering enables multiple providers (A, B, C, D) to peer their Internet routers with one another; a 10GE non-blocking fabric; scale to thousands of ports. FabricPath benefits for an IXP: a transparent Layer 2 fabric, scalable to thousands of ports, bandwidth not limited by chassis or port-channel limitations, simple to manage and economical to build.

L2MP: bridges require an additional ingress/egress header; RBridges also require a next-hop header to traverse a legacy Ethernet cloud. The header contains the ingress and egress D/RBridge switch IDs, i.e. the entry and exit points of the L2MP cloud. The original frame is unchanged, except for the FCS. Diagram: DBridge versus RBridge frame format.

Layer 2

ALQ-14-OCT-09 Layer 2 between data centers

ALQ-03-FEB-10 The task. Diagram: data center 1, a link, data center 2. The simple task: two data centers and one link, door to door.

ALQ-03-FEB-10 The task for advanced users: n data centers and redundant links. Required are (1) Layer 3 site-to-site coupling, (2) Layer 2 site-to-site coupling and (3) SAN extension. Diagram: data center 1, data center 2, ..., data center n.

ALQ-30-NOV-09 The options for DCI (data center interconnect): L2TPv3, EoMPLS, L2MP/TRILL, VPLS, VPLS over GRE, OTV.

ALQ-04-OCT-09 Components of the solution. Diagram: a backbone / service provider core with an RP and a multicast tree on Layer 3 (metro Ethernet); AED edge devices at each site, one of them single-homed; VLAN ID 42 stretched across Site 1 (Kuddewörde), Site 2 (Berlin) and Site 3 (München) on the Layer 2 (LAN) side. AED := Authorized Edge Device (one per VLAN).

ALQ-05-OCT-09 OTV = MAC bridging + MAC routing. An overlay IGP (IS-IS) exchanges LSPs between the edge devices. Diagram: a server with MAC AA in VLAN 42 sits behind the left AED on port E4/7 (join interface IP A); a server with MAC BB in VLAN 42 sits behind the right AED (join interface IP B); the MAC table of the left AED lists AA on E4/7 (hardware learning) and BB on the overlay towards IP B. Signalling is done via OTV (IS-IS); within the sites plain MAC bridging, across the core MAC routing via the overlaid IGP; the MAC table holds destination ports as well as IP routes. Each AED has an overlay interface and an internal interface.
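
A hedged sketch of an OTV edge device as of the NX-OS releases of that time (multicast-based control plane; later releases additionally require an otv site-identifier). Group addresses, VLANs and interface numbers are placeholders and not taken from the slides.

feature otv
otv site-vlan 99                      ! VLAN used to discover other local edge devices
interface Overlay1
  otv join-interface ethernet 1/1     ! uplink towards the L3 core ("IP A")
  otv control-group 239.1.1.1         ! multicast group carrying the IS-IS control plane
  otv data-group 232.192.1.0/28       ! SSM range for extended multicast traffic
  otv extend-vlan 42                  ! VLANs stretched between the sites
  no shutdown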

ALQ-05-OCT-09 FHRP: each site runs its own HSRP active and standby for 192.168.2.1 in VLAN 42, and the hosts (e.g. 192.168.2.100, mask 255.255.255.0, gateway 192.168.2.1) keep a local first hop even after VMotion & FT have moved them. Default behaviour: the HSRP hellos are filtered on the OTV links so that the HSRP groups stay local to each site.
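
Where the filtering has to be configured explicitly, one commonly documented approach on NX-OS is to keep the HSRP virtual MAC out of the OTV IS-IS advertisements; the exact syntax varies by release, so treat this as an illustrative sketch with made-up names (a VACL on the extended VLANs that drops the hellos themselves is usually recommended in addition).

! Keep the HSRPv1 virtual MAC range from being advertised across the overlay
mac-list HSRP_VMAC_DENY seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
mac-list HSRP_VMAC_DENY seq 20 permit 0000.0000.0000 0000.0000.0000
route-map STOP_HSRP permit 10
  match mac-list HSRP_VMAC_DENY
otv-isis default
  vpn Overlay1
    redistribute filter route-map STOP_HSRP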