A LEGACY RESUME / CV


Noel Milton Vega

http://www.vitae.wiki

nmvega@vitae.wiki

 

Big Data Analytics & Platform Architect (@PRISMALYTICS, LLC.)

 


 

FORMAL, CONTINUING, & SELF EDUCATION:

·        MS Electrical & Computer Engineering, Boston University, Boston, MA

·        BS Electrical Engineering, Rensselaer Polytechnic Institute (RPI), Troy, NY

·        2009 – New York University (NYU) School of Continuing and Professional Studies, New York, NY (ongoing)

 

STRENGTHS INCLUDE:

Service Oriented Business Strategist & Architect | Trend Identification | Thought Leadership

Team Direction | Cross-Functional Collaboration | Vendor Relationships | Enterprise-Level Mgmt. | Clear Written & Verbal Expression | Proven Success Across Multiple Business Verticals

 

 


BIG DATA VENDOR AND OPEN SOURCE TECHNOLOGIES & PLATFORMS USED:

·        Apache/Cloudera Hadoop: Installing clusters, as well as writing Map/Reduce jobs using Python and the Pydoop API for Hadoop MRv1

·        Apache PIG: Writing scripts to create & run parallel Map/Reduce workflows

·        Amazon Web Services stack: EC2, Elastic Map/Reduce (EMR), S3, EBS, etc.

·        Storm (by Twitter): Spouts & Bolts and Trident API


(Note: Please see end of resume for a list of legacy/past platforms used).

 

 

PROFESSIONAL WORK EXPERIENCE:


March 2013 – Present – Bank of America / Merrill Lynch

Data & Business Information Strategist/Architect (Consultant)

 

Mahout based Recommender Engine:

Technical-level role involved the design and implementation of Big Data infrastructure components to institute a near real-time publications Recommender Engine for clients/subscribers of the bank's financial research publications line of business. Technologies included Cloudera Hadoop; Mahout, to generate item-based, item-similarity, and item-cluster driven recommendations; and Apache PIG & Python for Map/Reduce jobs, including pre- and post-processing around the Hadoop runs.
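Mahout's item-based recommenders consume userID,itemID,preference triples, so a Python pre-processing pass of the following shape typically fed the Hadoop runs. This is a minimal sketch only: the pipe-delimited record layout and field names are assumptions for illustration, not the actual publication feed.

    # Hypothetical sketch: reduce raw readership events to the
    # userID,itemID,preference CSV triples Mahout's item-based
    # recommenders consume. Record layout here is assumed.
    import sys
    from collections import defaultdict

    def main():
        counts = defaultdict(int)  # (user, publication) -> view count
        for line in sys.stdin:
            try:
                user_id, pub_id, _ts = line.rstrip("\n").split("|")[:3]
            except ValueError:
                continue  # skip malformed records
            counts[(user_id, pub_id)] += 1
        # Emit userID,itemID,pref; the "preference" here is simply
        # the view count for that user/publication pair.
        for (user_id, pub_id), n in sorted(counts.items()):
            print("%s,%s,%d" % (user_id, pub_id, n))

    if __name__ == "__main__":
        main()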


Executive-level interfacing role involved managing customer expectations and keeping the project on track.

 

Storm (by Twitter) based real-time processing of FIX messages for event monitoring & handling:

As part of an evaluation process to identify successor technologies to Bank of America's real-time stream processing system at the time, implemented a P.O.C. based on (Twitter's) Storm to process over a million FIX messages per second. Implemented the 1-Nimbus + 3 worker-node Storm/Zookeeper cluster (including fail-fast provisions), and designed and wrote the Topology and Spout & Bolt components (in Java) that solved various high-speed stream processing use cases: subscribing to trade-message stream topics, operating on received messages as Storm tuples, re-publishing the results back to the stream under a (possibly different) topic, and, on a limited basis, writing to a Cassandra database as well. The P.O.C. led to the successful funding of a project, now in progress, to implement this in production.
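For illustration only: the production Topology, Spouts, and Bolts were written in Java against Storm's APIs, so the sketch below shows just the per-tuple FIX handling logic, in Python. The FIX tag numbers used (35 = MsgType, 55 = Symbol, "8" = ExecutionReport) are standard; the output-topic naming scheme is invented.

    # Illustrative per-tuple logic of a FIX-processing Bolt.
    SOH = "\x01"  # FIX field delimiter

    def parse_fix(raw):
        """Split a raw FIX message into a {tag: value} dict."""
        fields = {}
        for pair in raw.strip(SOH).split(SOH):
            tag, _, value = pair.partition("=")
            fields[tag] = value
        return fields

    def handle_tuple(raw):
        """Parse one message and decide where to re-publish it."""
        msg = parse_fix(raw)
        if msg.get("35") != "8":      # tag 35 = MsgType; "8" = ExecutionReport
            return None               # ignore everything else
        # Route by symbol (tag 55) -- output-topic scheme is hypothetical.
        return ("executions." + msg.get("55", "UNKNOWN"), msg)

    if __name__ == "__main__":
        sample = SOH.join(["8=FIX.4.2", "35=8", "55=IBM", "31=192.24"]) + SOH
        print(handle_tuple(sample))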

 


Oct 2009 – March 2013 – PRISMALYTICS, LLC.

Data & Business Information Strategist

 

Strategic leadership:

Work with organizations to provide Big Data & Business Intelligence Strategies across their portfolio of products & services, typically serving as the focal leader for new initiatives.

Specific roles can include:

 

·        Specifying and documenting multi-year data-leveraging roadmaps that meet client business objectives

·        Subsequently identifying optimal technologies to enable those roadmaps

·        Providing overarching guidance to lines of business / projects that have Data Strategy / Business Intelligence implications

·        Providing use-case specific approaches for Structured, Semi-structured, and Unstructured Big Data sets

·        Building a multi-year data management strategy for Hardware, Software and Services adoption. Components, among many, may include:

 

   Apache/Cloudera Hadoop (HDFS + Map/Reduce) for batch analysis of data, as well as its associated stack of ecosystem components, such as Cloudera HIVE and PIG to enable technical staff to write Map/Reduce analysis jobs; Cloudera Flume to collect ongoing streaming data; Sqoop to import/export RDBMS relational data to/from HDFS, HIVE and HBase; and Storm (by Twitter) for real-time message processing.
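Once components like Flume or Sqoop have landed data in HDFS, a quick sanity check from Python is often the first step. A minimal sketch, assuming Pydoop's pydoop.hdfs module; the landing directory is hypothetical, though Sqoop does write part-m-NNNNN files by default:

    # Sanity-check data landed in HDFS: list the target directory
    # and peek at the first few records of one Sqoop part file.
    import pydoop.hdfs as hdfs

    LANDING_DIR = "/data/landing/orders"   # hypothetical Sqoop target-dir

    for path in hdfs.ls(LANDING_DIR):      # list landed files
        print(path)

    with hdfs.open(LANDING_DIR + "/part-m-00000") as f:
        for _ in range(5):                 # peek at 5 records
            line = f.readline()
            if not line:
                break
            print(line.rstrip())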


Tactical Assignments Include:

·        Writing Hadoop “Pipes” based Map/Reduce & HDFS classes/programs using Python-2 and the Pydoop API on customer-supplied Hadoop platforms (see the sketch after this list)

·        Working with DBA-provided dataset dumps, performing ad-hoc analysis on them with Python programs and Map/Reduce

·        Writing scripts in Apache PIG to create & run parallel Map/Reduce workflows

·        Providing training / education on writing Pydoop-based Map/Reduce & HDFS programs for Hadoop

·        Leveraging my legacy storage background to help with data mobility strategies

·        Occasionally, leveraging my deeper technical background, designing & building the POC-sized Hadoop platform itself
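The "Pipes" work referenced above typically started from the classic word-count shape. A minimal sketch, assuming the legacy pydoop.pipes interface of that era (Python 2, Hadoop MRv1); real jobs substituted domain-specific map and reduce logic:

    #!/usr/bin/env python2
    # Word-count-shaped Pydoop "Pipes" job (legacy pydoop.pipes API).
    import pydoop.pipes as pp

    class Mapper(pp.Mapper):
        def map(self, context):
            # One input line per call; emit (word, "1") pairs.
            for word in context.getInputValue().split():
                context.emit(word, "1")

    class Reducer(pp.Reducer):
        def reduce(self, context):
            # Sum the counts for one key.
            total = 0
            while context.nextValue():
                total += int(context.getInputValue())
            context.emit(context.getInputKey(), str(total))

    if __name__ == "__main__":
        pp.runTask(pp.Factory(Mapper, Reducer))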



Sept 2007 - Sept 2009 - Network Appliance (NetApp)

DataCenter Cloud Solution Design and Monitoring

 

Extended NetApp’s basic enterprise monitoring platform (known as Operations Manager (OM) / Distributed Fabric Manager (DFM)) by writing code to (1) tap into notification events generated by the product and, where necessary, invoke lights-out corrective action (such as just-in-time growing of volumes that reached capacity), and (2) use DFM as a gateway/portal into the NetApp ONTAP enterprise to generate custom dashboards (in HTML or CSV format) reporting the health of an entire NetApp enterprise. Also performed heterogeneous migrations for NetApp customers, using tools like SecureCopy, Beyond Compare, Rsync, etc., onto NetApp filers (Storage Controllers) from competitor storage platforms.
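A simplified sketch of the "lights-out" volume autogrow idea, assuming 7-Mode ONTAP's "vol size <vol> +<delta>" syntax and a DFM-style volume-full notification already parsed into a dict. Hostnames, thresholds, and the event plumbing are hypothetical, and the exact CLI varies by release:

    # React to a volume-full event by growing the volume just-in-time.
    import subprocess

    GROW_STEP = "+5g"   # hypothetical growth increment

    def grow_volume(filer, volume):
        """ssh to the filer and grow the named volume by GROW_STEP."""
        subprocess.check_call(["ssh", filer, "vol", "size", volume, GROW_STEP])

    def on_event(event):
        """Handle one parsed DFM-style notification event."""
        if event.get("type") == "volume-full":
            grow_volume(event["filer"], event["volume"])

    if __name__ == "__main__":
        on_event({"type": "volume-full", "filer": "filer01", "volume": "vol7"})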

 


Sept 2006 – Sept 2007 – The City of New York
OpenSystems Infrastructure Design Architect (Consultant)

 

Storage, SAN, and Sun/Solaris Engineer designing and implementing the infrastructure for New York City’s employee time entry and management portal. The project objective was to implement a web-based application used by employees of New York City’s 129+ agencies to enter hours worked, vacation time, etc. The new infrastructure replaced numerous older mechanisms for entering and tracking such data with a central, modern system, making the process more cost-effective and budget forecasting easier. Technologies included SunCluster 3.2; VCS 4.x; EMC Symmetrix; Clariion CX300; IBM application server products (MQueue; WebSeal; WebSphere; TIM/TAM), etc.

 


Sept 2005 – August 2006    INET (InetATS ECN)/NASDAQ, New York City, N.Y.
Senior Systems Engineer

 

INET (formerly known as Island/InetATS and now part of NASDAQ) provides the transaction network and engine (a.k.a. ECN) that implements what is universally known as the NASDAQ stock exchange (the computing engine to which financial institutions connect to trade equities/stocks). The core of this platform consists of a grid of 1U systems running a 70 MB custom Linux & JRE based O/S, glued together by a UDP broadcast-based protocol. One of these systems is the match engine (the single computer that interprets and handles incoming requests); the rest implement supporting functions and customer connectivity.

 

By necessity, the INET-based platform is dynamic (say, to enhance internal back-end robustness; to expand customer-facing functionality; to comply with regulatory entities; to handle ever-increasing volume; to reduce latencies and response times, etc.). It is therefore a system constantly researched and augmented by the small team of engineers who work on it. My role on this team included instrumentation to measure, identify, and improve end-to-end UDP broadcast latencies throughout the platform (including the use of network taps and the sock/iperf/netperf/tcpdump/ethereal utilities, etc.).

 

Developed C programs to: attach to a network interface (typically ethX, but not necessarily) and watch for missed packets in the sequenced UDP broadcast stream; and implement a client/server application providing out-of-band control of servers in the core ECN network. Designed and implemented a SAN using Qlogic SB5202 FC switches and Nstor/Xyratex 4900 series fibre channel arrays. The SAN stores (among other things) historical data concerning transactions that have occurred on the core ECN.
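The production packet watcher was written in C against the interface directly; the Python sketch below shows only the sequence-gap detection logic. The 4-byte big-endian sequence-number header is a stand-in for the proprietary sequenced-stream format, and the port is arbitrary:

    # Detect gaps in a sequenced UDP broadcast stream.
    import socket
    import struct

    def watch(port=9000):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", port))                 # listen for the broadcast stream
        expected = None
        while True:
            data, _addr = sock.recvfrom(2048)
            (seq,) = struct.unpack("!I", data[:4])   # assumed header layout
            if expected is not None and seq > expected:
                # Packets [expected, seq) were missed; in production this
                # is where a re-request would be issued.
                print("missed %d packet(s): %d..%d"
                      % (seq - expected, expected, seq - 1))
            expected = seq + 1

    if __name__ == "__main__":
        watch()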

 

Undertook a project to design an environment capable of creating tiny embedded forms of the Solaris and Linux O/S with specific attributes, such as: kernel versions; compile options; kernel subsystems to include/exclude; versions of glibc to compile against. Although the initial version of this environment produced kernel and ramdisk images that could be embedded in the core grid of x86-based systems (via PXE & DHCP), the design was also extendible to specialized hardware boards based on RISC/ARM and other processors. The idea for the latter was to produce tiny Java network engines capable of handling closely spaced UDP packets, eliminating overhead associated with generic stock x86 hardware (such as interrupts).

As part of this design, I built a platform that allowed hundreds of 1U servers (i.e. embedded x86 targets) to PXE/DHCP/TFTP boot tiny custom appliance versions of Solaris or Linux over the network. Any time a machine was rebooted, its O/S was completely rebuilt from scratch, which is one reason the O/S was kept very tiny (appliance-like); from start (boot) to finish (running), it took about 70 seconds to build a machine. For Linux, ttylinux was used as a base, with files stripped from and added to that filesystem as needed; along with the addition of a kernel and ramdisk, this created the final version. For Solaris, I used the x86.miniroot ramdisk filesystem and kernel, which I opened up (via lofiadm mounts) and similarly customized by stripping and adding what I needed.

In both cases, the O/S was designed so that its application function (say, a UDP packet re-request server) was not hard-coded into it; rather, a boot script would, based on a configuration file it fetched (via wget), dynamically customize the server's personality at boot time. It was very scalable.
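The real boot-time scripts lived inside the tiny images and were shell; the Python sketch below just illustrates the config-driven "personality" step they performed: fetch a per-host config at boot, then start whatever role it names. The URL, file format, and role names are all hypothetical (urllib2 matches the Python 2 era of the platform):

    # Fetch this host's config and start the role it names.
    import socket
    import subprocess
    import urllib2  # Python 2 era

    CONFIG_URL = "http://bootmaster/configs/%s.conf"   # hypothetical

    def fetch_personality():
        host = socket.gethostname()
        conf = urllib2.urlopen(CONFIG_URL % host).read()
        # One "key=value" per line, e.g. role=udp-rerequest-server
        return dict(line.split("=", 1)
                    for line in conf.splitlines() if "=" in line)

    if __name__ == "__main__":
        personality = fetch_personality()
        subprocess.call(["/etc/init.d/%s" % personality["role"], "start"])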

Migrated the InetATS DR site from Secaucus, New Jersey to Ashburn, Virginia, a large project in its own right.

 


June 2004 – August 2005    Consultant
OpenSystems Engineer and Technical Writer for various clients

 

[Primus Financial]: Storage Engineer - retained in a consultant capacity to design and configure a Sun / Clariion / Brocade based SAN.

 

[EMC / Cablevision's Rainbow Media division]: EMC Engineer (residency program) - retained in a consultant capacity to assist their customer in re-designing and provisioning storage in an EMC Symmetrix 8430 and ED-1032 Connectrix based SAN environment.

 

[ Sun Microsystems' Clients ]: Technical authoring of Business Continuance processes, High Availability best-practice guides, and step-by-step HOW-TOs for various clients of Sun Microsystems. A small sampling of the many documents I have written over the years, for clients and for myself, can be viewed at the following online address: http://www.vitae.wiki

 


Oct 2002 - June 2004   The New York Mercantile Exchange (NYMEX), Manhattan & L.I. (DR)
Senior SAN Design/Architect/Implementation/Admin Engineer (Consultant)

 

Retained in a consultant capacity by NYMEX to fully design and implement a multi-site SAN (Storage Area Network). Other than my own, no vendor or third-party professional services were used in any of the work described below. Please refer to the following technical diagrams and papers, which relate to this work:

 

            NYMEX Clusters and DWDM Distance SAN | Buffer Credit Calculations for Distance SANs

 

Successfully designed, spec'd out, purchased, installed, and configured the following:

(1) Took two unused Hitachi HDS9960s, moved one to a DR site, and got TrueCopy working between the two via direct FC-AL over two DWDM lines. This step included upgrading HDS microcode, provisioning disks via the SVP and Remote Console, creating an RCU, ensuring different DWDM paths, etc.

(2) Based on the distance between the two sites, calculated the BB & EE credit requirements (a back-of-envelope version of that calculation appears below), then procured and configured 4 McData ES3232 switches (a redundant pair for each site). Through a pair of local ISLs between the two switches at each site, and pairs of DWDM ISLs between the two sites, converted the DWDM FC traffic from TrueCopy-only traffic to general FibreChannel SAN traffic (which included TrueCopy traffic).

(3) Purchased Hitachi SANtinel licenses for LUN Masking.

(4) As the first client of the SAN, created a mission-critical VCS 3.5MP2 cluster (for a critical Oracle database) at one site, and synchronously TrueCopy-replicated the data to the DR site.

Other accomplishments included creating specialized High Availability storage solutions using multi-initiator configurations with the D2 StorEdge array; configuring Squid Proxy servers; and other Sun/Solaris architectures/implementations.
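The BB_Credit sizing reduces to simple arithmetic: credits must cover the link's round-trip time divided by the time to serialize one full frame. A back-of-envelope sketch using the usual rules of thumb (~5 us/km propagation in fibre, 8b/10b encoding); the 45 km distance is illustrative, not the actual NYMEX site separation:

    # Back-of-envelope BB_Credit sizing for a distance FC link.
    DISTANCE_KM    = 45.0        # one-way fibre distance (illustrative)
    PROP_US_PER_KM = 5.0         # ~5 microseconds per km in glass
    LINE_RATE_BPS  = 1.0625e9    # 1 Gb/s Fibre Channel line rate
    FRAME_BYTES    = 2148        # full FC frame incl. headers/CRC (approx.)

    # Time to serialize one full frame (8b/10b => 10 bits per byte):
    frame_time_us = FRAME_BYTES * 10 / LINE_RATE_BPS * 1e6   # ~20.2 us

    # Credits must cover the round trip (frame out + R_RDY back):
    round_trip_us = 2 * DISTANCE_KM * PROP_US_PER_KM         # 450 us here
    bb_credits = round_trip_us / frame_time_us               # ~22.3

    print("frame time: %.1f us" % frame_time_us)
    print("round trip: %.1f us" % round_trip_us)
    print("BB credits needed: %d" % (bb_credits + 1))        # round up: ~23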

 


Jan 2001 - Sep 2002     Cablevision Inc., Hicksville, Long Island
Lead Senior Sun/EMC/Veritas/Solaris Infrastructure Design Engineer (Consultant)

 

Retained in a consultant capacity by Cablevision core Engineering to design and implement all aspects of the compute and storage processing infrastructure (Sun, EMC, Veritas, etc.) forming the backbone of what would serve a projected 4 million households with a new home media service known as iO.tv (i.e. Interactive Optimum, based on a digital set-top box that uses HTTP to display a scalable and remotely updatable user interface, and a cable modem to deliver encrypted digital video on demand, email, and other plug-in interactive services).

Among other things, I designed and implemented (without EMC professional services) the EMC-based SAN that services all the functions in the Master Head End datacenter (customer database, digital movie encoders, the Conditional Access smart-card encryption system, the set-top box auto-provisioning system, etc.). Technologies included ED-1032 Connectrix; 8730/8430 Symmetrixes; Fibre Zone; Volume Logix; ECC 4.3; SRDF; TimeFinder; Persistent Binding; PowerPath.

Designed and built highly available HA clusters based on Veritas VCS and various other availability technologies and best practices (PowerPath/JNI/Qlogic, DMP, IP multipathing, mirroring, load balancing through Resonate, etc.). Designed and coded a black-box engine that sits on top of Sun's Jumpstart product, eliminating Jumpstart's traditional shortcomings and making it enterprise-scalable.

Evaluated and installed the new SunFire 6800, 4800, and 3800 products. The evaluation included creating single and dual partitions, and the domains within them, using the System Controller Console and Domain Console CLI; physically adding and removing components like CPU/Memory, Interface Boards, and PCI cards in one domain to verify the isolation of other domains; flash-upgrading the 6800 & 3800 firmware; and extensive load testing, including designing and creating a platform for Oracle 8.1.7/9i Parallel Server testing on EMC storage.

Essentially, I designed and physically implemented (always with an enterprise and scalable mentality) everything to do with the back-end (Head End) computing infrastructure, which would ultimately service up to several million households in the New York Tri-State area.

 


Aug 1999 - Dec 2000     Mail.COM Inc., New York, N.Y.

Lead Senior Sun/EMC/Veritas/Solaris Infrastructure Design Engineer

 

Engineered, built and administered the EMC-based Storage Area Network (SAN) on which Mail.COM hosted its critical businesses: consumer, partner-ISP, and business-to-business messaging (for example: Mail.com, Email.com, Iwon.com, CNN.com, etc.). The design consisted of four EMC Symmetrix 3930s, two 3830s, three EMC dual-director Connectrix Fibre Channel Switches, four EMC Celerra File Servers, and a dozen 28-CPU Sun UEx500 Enterprise Servers. Additional software components included EMC PowerPath, ECFM Connectrix Manager, Volume Logix, ECC SymCLI & OSM Manager, Veritas Volume Manager (VxVM), Veritas FileSystem (VxFS), etc.

 

On the SAN side of the design, redundancy and resiliency were provided through multiple active (Power)Paths across different Fibre Channel switches, JNI HBA boards across different Sun I/O boards, VxFS for quick filesystem recovery, and VxVM-mirrored O/S and SWAP volumes. High availability was accomplished by configuring an additional UE6500 with visibility to the entire SAN storage pool via Fibre Channel zoning and Volume Logix HBA-to-FA hypervolume authentication. In this configuration, the UE6500 can assume the compute task of any of the other N UExx00-class Sun servers through sd.conf management and the VxVM diskgroup import/deport mechanism. ECC SymCLI TimeFinder and BCVs were also configured for third-mirror-based data backup, snapshots, etc.
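A sketch of the takeover flow described above, written as a Python wrapper around the real VxVM CLI (vxdg, vxvol) and the Solaris vxfs mount; the diskgroup, volume, and mountpoint names are hypothetical, and "-C" clears the failed host's import locks:

    # Standby-host takeover: import the diskgroup, start volumes, mount.
    import subprocess

    def run(*cmd):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd)

    def take_over(diskgroup, volume, mountpoint):
        run("vxdg", "-C", "import", diskgroup)        # import, clearing locks
        run("vxvol", "-g", diskgroup, "startall")     # start all volumes
        dev = "/dev/vx/dsk/%s/%s" % (diskgroup, volume)
        run("mount", "-F", "vxfs", dev, mountpoint)   # Solaris vxfs mount

    if __name__ == "__main__":
        take_over("appdg", "appvol", "/app")          # hypothetical names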

 

On the NAS side of the configuration, multiple NFS volumes are hosted by multiple EMC Celerra File Servers. Performance and availability were addressed through a combination of 2-to-1 active/passive data mover failover configurations, redundant 100 Mbit ethernet links across different datamover NIC cards and Cisco switches, etc.

Based on Sun’s Jumpstart model, engineered a method for jumpstarting machines on Mail.COM’s global networks without the need to configure Jumpstart servers, or even boot servers, on remote segments. Traditional Jumpstart environments minimally require a boot server on each client network segment. A result of this Lightweight Jumpstart Server design is that network segments no longer require (and in fact are prohibited from having) their own boot, configuration, and install servers: the configuration, install, and boot server for all of global Mail.COM exists, and is maintained, in one location (New York), and high-speed lines are used to build servers all over the world. Because details of the build process are hidden from the remote (perhaps recently hired) administrator, who need not even be familiar with Jumpstart, Mail.COM was able to quickly deploy new Solaris infrastructures around the world to exactly the same standard.

 


Oct 1998 - Aug 1999     Chase Manhattan Bank H.Q., New York, N.Y.
Global Systems Solaris/Unix Design Engineer (Consultant)

As a member of a three-person Unix Global Engineering team, I designed Sun Solaris-based solutions for Chase’s global business groups. Projects included the design, testing, and building of a Veritas FirstWatch based HA disaster recovery platform for the Chase private key infrastructure (PKI), using multiple Sun machines connected to D1000s in multi-initiator fashion, and SVR4 packaging of products for inclusion in Chase's engineering build of Solaris.

 


Apr 1995 - Oct 1998     SUN MICROSYSTEMS INC., New York, NY
UNIX/Sun Integration & Support Engineer for Sun PS

 

As a senior member of the professional services division, provided technical, integration and consulting services for business clients using SUN Solaris/Sparc-based computer and storage networks as their enterprise-wide client/server platform. The vast number of hardware & software products and customer UNIX configurations demanded a rapid learning curve, dedication, and passion for the work. As a result, I was awarded Sun Microsystems' Northeast Area Engineer of the Year award, and became a Senior Systems Support & Professional Services Engineer within my first year at the company. Here are a few of the Sun Microsystems client assignments I architected and implemented hands-on:

 

For Alliance Capital Management Corp, I built an environment consisting of UEx00 enterprise servers with RSM-219s and EMC storage for a Y2K testing lab. Technologies included Symmetrix 3830, PowerPath, Veritas Volume Manager, etc. Wrote a C program that allows any user on a system to have independent control and view of the date & time without altering the UNIX system (kernel) date & time. This ability to change the date/time anywhere in the range 01/01/1970 00:00:00 - 01/19/2038 03:14:07 at the user-shell level meant that multiple developers could test for Year 2000 compliance, each with a view of their own time (within their UNIX shell), without disturbing the REAL master system time.
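The original was a C library interposed on the system's time lookups per user shell; the Python sketch below shows only the underlying offset arithmetic: freeze an offset when a fake "now" is chosen, then always report real time plus that offset, so the shifted clock keeps ticking while the kernel clock is untouched.

    # Per-session shifted clock: real time plus a stored offset.
    import time

    class ShiftedClock(object):
        def __init__(self, fake_now_epoch):
            # Offset between the desired "now" and the real clock.
            self.offset = fake_now_epoch - time.time()

        def now(self):
            """Real time shifted by the stored offset; keeps ticking."""
            return time.time() + self.offset

    if __name__ == "__main__":
        # Simulate 1999-12-31 23:59:50 UTC to watch a Y2K rollover.
        clock = ShiftedClock(946684790)
        print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(clock.now())))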

 

For Mayer & Schweitzer Inc., I created the custom JumpStart server used to build their 100-position UltraSparc-based trading desks.

 

For Olsten/Adecco Staffing Corporate HQ, as the systems architect and integrator, I deployed a Sun/EMC Enterprise-based infrastructure for Year 2000 efforts. The hardware I configured was: ten E6000s, one E5000, one E4000, three RSM2000s, two Ultra2s, and Sun-to-EMC connectivity (2.9 TeraBytes over two Symmetrix Storage Units: 3700 and 5700 models). With this equipment I built the following configurations from the ground up:

 

Four E6000 HA Clusters (pairs) using Veritas FirstWatch HA and shared EMC storage over Ultra Wide Differential SCSI interfaces. Configured carefully designed striped volumes using Veritas Volume Manager and Veritas FileSystems to work optimally with Oracle 7.3.3 and 8; wrote supplemental programs to the Veritas Oracle Agent API to start/stop the TNS listener during failover, and to start/stop the HA monitoring of Oracle for DBA use when performing database maintenance (see the sketch below). Configured dual ATM interfaces to support LECS and LANEmulation protocol for IP support over the ATM fabric. Configured an RSM2000 and associated LUNs on an E6000 in one of the HA pairs for private disk storage; this cluster doubles as a Load Test Generator (LoadRunner) / Oracle Test Database pair. The extra RSM2000 added the complexity of managing different disk controller and minor device numbers between the two machines when using Veritas Quick I/O for databases.
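A sketch of the listener start/stop supplement mentioned above, assuming Oracle's standard lsnrctl utility; the ORACLE_HOME path and the way the HA agent invokes the script are hypothetical:

    # Start/stop the TNS listener on behalf of the HA agent.
    import os
    import subprocess
    import sys

    ORACLE_HOME = "/opt/oracle/product/7.3.3"   # hypothetical path

    def lsnrctl(action):
        env = dict(os.environ, ORACLE_HOME=ORACLE_HOME)
        subprocess.check_call(
            [os.path.join(ORACLE_HOME, "bin", "lsnrctl"), action], env=env)

    if __name__ == "__main__":
        # Called with "start" during failover-in, "stop" during
        # failover-out or DBA maintenance windows.
        lsnrctl(sys.argv[1])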

 

Two E6000 machines using Solstice DiskSuite and shared EMC storage over Ultra Wide Differential SCSI interfaces; striped volumes were carefully configured to work optimally with Oracle 7.3.3 and 8; dual ATM similarly configured.

 

An E6000/RSM2000 and an E4000/RSM2000. Created both RAID 5 and RAID 0+1 striped LUNs for use with Oracle and for print spooling.

 

Configured redundant Ultra2 pairs to serve corporate-wide DNS name services, home directories and standard application repositories (for automount), and rdist distribution services to the Enterprise servers.

 

Installed Oracle 7.3.3 and created a basic database. With input from DBAs, created optimal database environments: tuned the MINFREE and LOTSFREE kernel variables in /etc/system, excluded unnecessary modules from being loaded, set SCSI options for optimal disk I/O, and created UFS and VxFS filesystems with appropriate block sizes.

 

Installed, debugged, configured and/or designed the following UNIX-based subsystems: the Solaris O/S; 10/100BaseT & ATM (Classical IP and LANEmulation) based TCP/IP networks; RAID volumes on Storage Array disks using DiskSuite or Veritas; in-memory tmpfs-based filesystems; NIS and NFS services; scripted backup schemes; user accounts; automatic installation via JumpStart; network terminal servers; departmental intranets with HTTP and FTP services; a basic firewall with FireWall-1; and High Availability servers for NFS and for Oracle Database & TNS Listener processes.



Mar 1994 - Apr 1995   THE SUMITOMO BANK, LTD., One World Trade Center, NYC
Trading Floor Systems Designer & Administrator

 

Deeply involved in the complete design and administration of a new 70-position trading floor during relocation from 1 World Trade Center to 270 Park Avenue; relocation efforts were initiated after the WTC bombing of 1993. Responsibilities included design of a SUN-based 10BaseT trading network using SparcServer 20s and 1000s for the back end and dual-headed Sparc 10s for the desktops. The UNIX portion of the rollout included the configuration of NIS, NFS/Automounter, OS and application distribution via Jumpstart and/or rdist, user account management, and backup strategies.

In addition to normal UNIX system administration functions, supported the market data distribution platforms. This included managing UNIX platform-based data feeds from Dow Jones Telerate, Reuters, and Knight Ridder, as well as managing the market data distribution platform itself: the Teknekron TIB and MarketSheet products. Supported non-UNIX market data products including the Reuters Dealing/2000 foreign exchange system, Knight Ridder Money Center, Telerate Teletrac, and Bloomberg charting packages; most non-UNIX market data sources were displayed through a Reuters Prism+ video switch system, which I also supported. Also involved with the layout and specification of the floor's cabling/communications infrastructure (IDF and MDF closets, Cat 5 cabling) during the design and implementation of the trading floor.

 


Feb 1993 - Mar 1994     IMAGE PROCESSING SYSTEMS (a startup), New York, NY

Project Engineer/Junior Supervisor/Developer

 


May 1991-Sep. 1992      SARNOFF RESEARCH (RCA) LABS, Princeton, NJ
Software/Computer/DSP Hardware Design Engineer (Master's Thesis Research & Development)

 

Designed and built the DSP hardware board that predicts the signal vectors (i.e. pixel position, velocity, and hue) for DirecTV broadcast signals; the output vectors are used to substitute for momentary signal loss during atmospheric disturbances. Wrote firmware in C to interface a UNIX serial port to a receiver board used in RCA's DirecTV development project; the interface provided the means to transfer test-vector information to and from the embedded system under test. Developed C test programs to predict the behavior of hardware being developed for a new broadcast system (DirecTV) designed to transmit several programs over a single satellite channel using MPEG compression techniques.

 


May 1990 - Jun 1991     GE Aerospace, Syracuse, NY
Communications Engineer
(secret clearance)

 

As a member of a five-person team, assisted in the design, coding and testing of software in 68030 assembly language and C to implement low-level military protocols for the Navy Seawolf submarine defense project. These protocols, which provided a link between various physical listening devices on the outside of the ship and the ship's main operating system, include the RS-422, SCSI and NTDS (Navy Tactical Data Standards) protocols B & E.

 

 


Vendor and Open Source technologies & platforms used (Recent):

       - Apache Hadoop: Writing Map/Reduce jobs using Python-2 and the Pydoop API for Hadoop

       - Python2/3 and analysis tools: Python and the standard library, NumPy, SciPy, SciTools, matplotlib, nltk, pandas, etc.

       - Apache PIG: Writing scripts to create & run parallel Map/Reduce workflows

       - MongoDB: Occasionally interacting with MongoDB via Python and the Pymongo API for ad-hoc data analysis

       - Amazon Web Services stack: EC2, Elastic Map/Reduce, S3, EBS, etc.

 

Vendor and Open Source technologies & platforms (Legacy / used in the past):

SUN Microsystems / Solaris:
       - Solaris / OpenSolaris features
       - Solaris Volume Manager,
       - IP Multipathing,
       - Sun Enterprise Volume Manager,
       - MPXIO Traffic Manager
       - Solstice HA / Sun Cluster 3.x
       - ZFS & Zones (Containers)

       - Solaris xVM

       - and more (truncated for brevity).

LINUX (Distribution neutral built from scratch for embedded compute systems):
       - Linux (A 2.4 or 2.6 kernel; A tiny filesystem; and a RAM disk)
       - Busybox
       - Ttylinux

        - XEN Virtualization

EMC:
       - Symmetrix / Symmetrix DMX
       - Celerra
       - Clariion

        - Clariion Navisphere 6.x
       - EMC Control Center (ECC 6.x)
       - SymCLI/Solutions Enabler: symmir; symconfigure; symmask; symdev; symoptmz; etc.
       - Celerra: NAS Command Line Interface
       - Connectrix SAN Directors and switches
       - FibreZone for Solaris (FibreZone & Volume Logix: fzone; fpath; symmask; symmaskdb)
       - Volume Logix / Symmask(db) (HBA Symmetrix Hypervolume Access Security)
       - PowerPath for Solaris (for multipath redundancy & load balancing)

       - and more (truncated for brevity).

BROCADE / CISCO / McDATA:
       - Cisco MDS95xx FibreChannel Directors
       - Brocade Silkworm line; 12xxx, 48xxx Directors; and Multi-protocol routers/gateways
       - McData Sphereon ES3232; and Directors
       - SANavigator 4.x, SANPilot, and other management tools.

        - Nishan IPS3000 SoIP (Storage over IP)

        - ISLs, IFLs, Trunking

VERITAS:
       - Veritas Volume Manager (VxVM)
       - Veritas FileSystem (VxFS)
       - Veritas Volume Replicator (VVR)

       - Veritas Cluster Server (VCS)
       - Veritas Quick I/O for databases
       - Veritas FirstWatch HA (VxFW)

Hitachi HDS PRODUCTS:
       - Hitachi HDS9960/70/80/90

       - Remote Console
       - Hi Command
       - HORCM CCI Command Line Interface
       - SVP (Hitachi array configuration software)
       - SANtinel (Lun Masking)
       - TrueCopy

LANGUAGES: Unix script utilities (ksh/bash, sed, awk, & piped unix commands), C, Assembler, PERL, Java

CORE OS Utilities:
NFS, NIS, DNS, Solaris Jumpstart, TCP/IP, ufsdump, rdist, rsync, account admin, patch & package administration, Solaris Package creation, SSH, XNTP, Squid Proxy Server, Scripting, etc. (and more -- truncated for brevity).

HARDWARE SYSTEMS (truncated version for brevity):
UltraSparc 1, 2, 5, 10, 60 Desktops
Ultra Enterprise Servers 3/4/5/6500
Sun Fire Enterprise Servers 420, v480, v880, 3800, 4800, 4810, 6800
Sun Storage: RSM2000, D1000s, A1000s, A5000s, T3+
Digi CM16/32 Console Servers
EMC Symmetrix: DMX, 8730 & 8430 series, EC1000/ED-1032 Connectrix (McData) Switches, DS16M McDATA Switch, DS16B Brocade Switch, Celerra File Servers, Enterprise Storage & Solutions (SRDF, TimeFinder, SymmOptimizer, SDR, etc.), JNI/Qlogic/Emulex fibre boards and configuration, Clariion Storage, ECC 5.x, Hitachi HDS9960/70, McData Sphereon ES3232 Fibre Channel Switch, Brocade 3800 FC Switches, Adva FSP-3000 DWDM, nStor/Xyratex 49xxx Fibre Channel & 52xxx SATA based storage arrays, Qlogic SB5202 Stackable FC Switches; Cisco MDS 9506 Directors