400G Over Multimode Fiber: BiDi Changes the Game

For anyone who has managed a data center fiber plant over the past decade, the arrival of 400 Gigabit Ethernet came with a painful side effect: single-mode fiber. If your data center was built in the era of 10G or 40/100G BiDi, chances are your structured cabling was entirely multimode, a perfectly reasonable choice at the time. Then 400G arrived, and all the dominant transceiver options (DR4, FR4, LR4) required single-mode fiber. Overnight, that existing multimode plant became a liability for anyone planning an upgrade.

That situation is now changing, thanks to a transceiver Cisco quietly added to its QSFP-DD portfolio in the last few months: the QDD-400G-BD.

Operations vs Projects Balance in IT infrastructure teams

For the past two years, I have been managing a team responsible for our data center infrastructure. It is a medium-sized team in charge of HPC clusters, bare-metal servers, storage, the network, and virtualization and container platforms such as VMware, OpenStack, and Kubernetes, plus some central services.
This team has a dual mandate: keep daily operations running smoothly and deliver innovative, challenging engineering projects. As the person responsible for resource management and for keeping the team's project schedule on track, one of my main challenges is finding the right balance between these two responsibilities, which requires a strategic approach to resource allocation and time management. Let me give you my take on the operations vs projects balance in IT infrastructure teams.

Cisco Live 2024 Wrap-Up – Cisco Nexus HyperFabric, Disaggregated Scheduled Fabric (DSF) and SONiC

Last week, I attended my 11th Cisco Live in person, in fabulous Las Vegas. This post is my Cisco Live 2024 wrap-up.

I can already tell you that the next edition of Cisco Live US will be held June 8–12, 2025, in San Diego, California. If my company agrees to send me there, I'm already looking forward to it, because San Diego is a wonderful city.

The Future of Network Engineering in the AI/ML era

It seems like only yesterday that I saw my first network automation presentation at a conference. I remember it very well: it was in 2015, at the Cisco Network Innovation Summit in Prague, where Tim Szigeti presented the first version of Cisco APIC-EM, the future Cisco Digital Network Architecture (DNA) controller. I already wrote about it in a previous article, published in 2018, on my journey toward network programmability and automation.

After that presentation, and for many years afterward, one question was on everyone's lips:

Migrating Cisco FabricPath and Classic Ethernet Environments to VXLAN BGP/EVPN over a 400Gb-based Clos Topology, part 1 – the why

Over the past three years, I have spent a good portion of my time testing, planning, designing, and finally migrating our data center network from Cisco FabricPath and Classic Ethernet environments to VXLAN BGP/EVPN, and simultaneously from a classic hierarchical two-tier architecture to a more modern 400Gb-based Clos topology.

The migration is not yet 100% complete, but it is well underway, and I have gained significant experience along the way, so I think it is time to share my knowledge and experiments with the community.