This series started as a good idea, and it still is, but it was badly planned. So I will shorten it a bit: to finish, I will just write the remaining posts and hope they come out the best way I can.
It has been a while since I wrote the last part. Why, you may think? Here is a list of reasons:
- A 22-month-old baby, parental leave, etc. I don’t need any gym membership.
- PPL-A, the Private Pilot Licence: 850 pages of theoretical goodies I need to learn.
- Building an IoT and smart home solution from scratch: soldering, coding, etc.
- Work, and a new job.
- And some other stuff.
Sadly, I have the same amount of time as everybody else.
A little update about the project: we are now live on a 2110 network and it is working “well”. I will explain more.
The building blocks.
From a network point of view there were two “challenges”: the design layout (spine/leaf topology or monolithic) and the vendor. To tackle the second one, I first needed to settle the first one, the layout.
There are advantages and disadvantages to both. Monolithic means one big switch; in some cases a chassis with line cards can work, but it would not scale right with our design thoughts. Its advantage is a non-blocking architecture: there is no issue with uplinks. The disadvantages are that chassis are expensive, and once you have filled one up you need to buy a larger one, or if a larger one does not exist, you need to buy a second chassis.
To get the scale and a design that suits our needs, we chose a spine/leaf architecture. In return we have to deal with a blocking architecture and ensure that we always have enough uplinks, which will probably cost more for now.
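To make the “enough uplinks” trade-off concrete, the usual yardstick is the oversubscription ratio of a leaf: total downlink bandwidth divided by total uplink bandwidth. A quick sketch (the port counts below are hypothetical examples, not our actual design):

```python
def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Ratio of total leaf downlink bandwidth to total uplink bandwidth.

    1.0 means non-blocking; above 1.0 the leaf is oversubscribed
    (blocking) by that factor.
    """
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# Hypothetical leaf: 48 x 25G downlinks, 6 x 100G uplinks
ratio = oversubscription_ratio(48, 25, 6, 100)
print(ratio)  # 2.0 -> blocking by a factor of two
```

For uncompressed video you generally want this at or near 1.0, which is exactly why spine/leaf uplinks end up costing what a big chassis would have.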
The next question was how to ensure that we have enough uplinks and that traffic is somehow equally shared among them. We looked at some solutions. There was Nevion VideoIPath, an SDN solution I was not comfortable with, and it did not support 2110, only 2022. Next we looked at Cisco NBM, which I liked; its only disadvantage was DCNM, too slow for now. There were some other SDN solutions as well, but they did not appeal to me. So we went back to the “simple” one, ECMP. More on that later.
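The core idea of ECMP is that the switch hashes header fields of each packet and uses the result to pick one of the equal-cost uplinks, so a given flow always follows the same path and stays in order. A toy sketch of that behaviour (real switches do this in the ASIC; this is just an illustration, not vendor code):

```python
import hashlib

def ecmp_pick(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: int, n_links: int) -> int:
    """Pick an uplink index from the 5-tuple, the way an ECMP hash would.

    Deterministic: the same flow always maps to the same link. The flip
    side is that a few large flows can hash onto the same uplink and
    overload it while others sit idle.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# A multicast video flow (UDP, proto 17) landing on one of 4 uplinks
link = ecmp_pick("10.0.0.10", "239.1.1.1", 10000, 20000, 17, 4)
print(link)
```

The determinism is also ECMP’s weak spot for uncompressed 2110: a handful of elephant flows hashing onto the same link is a real risk, which is part of why the SDN controllers exist in the first place.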
The next BIG thing is PTP; here, time is not endless but in sync. All streams need to get the same time source. This is important, for example, for putting video and sound together. Spoiler: we use Meinberg as the PTP source. Using PTP in a LAN is not an issue; getting it over a WAN and handling the delay is… to put it nicely, hell! More on PTP over WAN later. The second thing to be aware of is getting switches that support PTP.
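For reference, PTP (IEEE 1588) computes the slave’s clock offset from four timestamps: Sync sent (t1) and received (t2), Delay_Req sent (t3) and received (t4). The formulas assume the path delay is symmetric in both directions, which is roughly true on a LAN and usually false over a WAN; that asymmetry is exactly the “hell” mentioned above. A minimal sketch with exaggerated example numbers:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Compute clock offset and mean path delay from the four PTP
    timestamps (all in seconds).

    Assumes a symmetric path; any asymmetry shows up as a hidden
    offset error, which is the core PTP-over-WAN problem.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock 0.5 s ahead, 0.1 s path delay each way
offset, delay = ptp_offset_and_delay(10.0, 10.6, 11.0, 10.6)
print(offset, delay)  # offset ≈ 0.5 s, delay ≈ 0.1 s
```

This is also why PTP-aware switches matter: boundary or transparent clocks correct for the queuing delay inside the switch, which an ordinary switch would silently add to t2 and t4.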
For the vendor part, there were two choices: Arista and Cisco. Why those two? Good question! My Dutch colleagues had tried to do this on Juniper and did not get it to work; they tried for a long time, had to give up, and went for Arista. There are Mellanox switches, FS switches, etc., but I wanted a vendor I knew I could trust, one that had implemented similar networks before. That narrowed it down to two vendors.
I did a comparison between them, had one-to-one talks to ask a lot of questions, etc.

Cisco:
- Has NBM
- Own ASIC design
- Known good TAC
- TAC supports third-party SFP vendors
- Good price (was a little bit cheaper than Arista)
- RTP support (oh yeah)
- Full support for streaming telemetry out via OpenConfig
- DCNM (too slow)

Arista:
- Can point to more broadcast implementations than Cisco
- Good OS (EOS)
- Really good SFP diagnostics
- Not that good TAC; difficult to get support if running third-party SFPs
- Uses Broadcom ASICs
- To get all telemetry streams, you need their expensive software
- No RTP support
There is more, but this was just a quick overview. I will write a more in-depth article about the battle between Arista and Cisco in the broadcast and media industry.
What did we go for? Oh yes, Arista. Why? Again, good question. The short answer: because I had to. I will not elaborate on this for now, maybe later.
Now I had these pieces of the puzzle.
Sorry for any misspellings.