👋🏻 👨🏼‍💻 🎬

I'm a software developer with 15+ years of experience in the Media and Entertainment Industry.


Before diving into code, I was a Video Director and wore many hats in post-production. Now, I help shape the future of media conversions in the cloud with Cinnafilm's PixelStrings Platform.

When I'm not coding, I'm creating with LineDream, my generative art library for Python, or developing Asset Veranda, a media asset management tool.

Curious about my past life as a video director? Check out my portfolio here.


About Me

My name is Marc Leonard. I am a creative professional and software engineer with a unique background that bridges the media and entertainment industry with software development.

In 2010, I graduated from DeSales University in Center Valley, PA, with a major in Television/Film and a minor in Business. Early in my career, I spent two years as the lead editor of special programming for Reelz Channel. Following that, I freelanced for various online media outlets and brands such as Reddit, Pop Sci, and The Verge. I also worked as a colorist for feature films, short-form content, and dailies.

My career took a turn towards the Outdoor Industry, where I started The Crux Collective, a short-lived outdoor news aggregator, and founded Rove Media, a one-man media consultancy. Through Rove Media, I created videos and materials for brands like L.L.Bean, Outside Online, Patrol USA, and GoPro.

Since 2017, I have been working as a Senior Software Developer at Cinnafilm, contributing to the development of PixelStrings, a cloud-based video processing platform. My role involves designing and developing the API and backend, orchestrating cloud infrastructure, and maintaining a horizontally scalable video transcoding system. I leverage my proficiency in Python, JavaScript, TypeScript, and HTML/CSS, along with frameworks like Flask, Django, and FastAPI, to build robust and maintainable applications.

In addition to my professional work, I am the creator of LineDream, a Python generative art library designed for SVG output and originally intended for pen plotter art. I am also developing media asset management software called Asset Veranda.

I am currently enjoying the fruits of living in Bozeman, Montana: skiing, mountain biking, and hiking.

Experience
  • 2017 - Present

    Cinnafilm Inc.

    Core backend developer for PixelStrings, a cloud video conversion platform for studios, broadcasters, independent creators, and post-production facilities.
  • 2023 - Present

    Asset Veranda

    Developer of Asset Veranda - a media asset management tool for producers and editors.
  • 2020 - Present

    LineDream

    Creator and maintainer of LineDream - a generative art library for Python.
  • 2012 - 2019

    Video Director & Post Production Specialist

    For 7 years I worked as a freelancer for many companies and productions. For a comprehensive overview of my work, please visit this link.
  • 2012 - 2014

    Instructor

    Adobe Premiere and Adobe After Effects instructor for the University of New Mexico Continuing Education program.
  • 2011 - 2013

    Reelz Channel

    Lead Editor of special programming
  • 2006 - 2010

    DeSales University

    Major - Television/Film
    Minor - Business

Professional Skills
  • Languages: Python, JavaScript, TypeScript, HTML/CSS, C++, C# (working knowledge)
  • Frameworks: Flask, Django, FastAPI, SQLAlchemy, Pydantic, Pytest, boto3, Angular, Vue, HTMX
  • Audio/Video Libraries: MainConcept, SRT, FFmpeg, gstreamer, miniaudio, Blackmagic RAW SDK
  • Tools: Terraform, Ansible, Docker, Linux, Nginx, AWS, Azure
  • Databases: MySQL, PostgreSQL, SQLite, MongoDB, Redis

Latest News
  • Understanding Python Object Scope Lifecycle

    In Python, understanding how objects are managed in memory can be crucial for writing efficient and bug-free code. One aspect of this management is the lifecycle of an object: when it is created, used, and destroyed. Tying an object's lifetime to its scope can be incredibly useful, but it also comes with a pitfall.

    Example: Timing Execution with a Class

    Consider the following class FuncTime, which measures the execution time of a block of code:

    import time
    
    class FuncTime:
        def __init__(self):
            # Record the moment the instance is created
            self.start = time.time()

        def __del__(self):
            # Runs when the instance is destroyed
            ex_time = time.time() - self.start
            print(f"Execution Time (sec) - {ex_time}")
    

    The FuncTime class records the start time when an instance is created and calculates the elapsed time when the instance is destroyed.

    Working Code Example

    Here’s a function, something, that demonstrates how this class can be used to time nested function calls:

    import time
    
    class FuncTime:
        def __init__(self):
            self.start = time.time()
    
        def __del__(self):
            ex_time = time.time() - self.start
            print(f"Execution Time (sec) - {ex_time}")
    
    def another():
        a = FuncTime()
        time.sleep(2)
    
    def something():
        b = FuncTime()
        time.sleep(1)
        another()
    
    if __name__ == "__main__":
        something()
    

    In this example, the FuncTime instances are assigned to variables (a and b). When the functions another and something return, those variables go out of scope, the reference counts drop to zero, and the instances are destroyed. This triggers the __del__ method, printing the execution time: a lives for the 2-second sleep in another, while b lives for the 1-second sleep plus the nested call, roughly 3 seconds:

    Execution Time (sec) - 2.0050549507141113
    Execution Time (sec) - 3.0054609775543213
    

    What Happens When We Don't Assign Variables?

    Let’s modify the code slightly by removing the variable assignments:

    def another():
        FuncTime()
        time.sleep(2)
    
    def something():
        FuncTime()
        time.sleep(1)
        another()
    
    if __name__ == "__main__":
        something()
    
    Execution Time (sec) - 5e-06
    Execution Time (sec) - 2.2e-05
    

    You might expect the same output, but it's quite different... Why? The answer lies in CPython's reference counting. Since the objects are never assigned to variables, nothing holds a reference to them, so each one is destroyed as soon as the expression that created it finishes. The __del__ method fires immediately, and the timers measure essentially nothing.

    Conclusion

    Understanding the lifecycle of objects in Python is key to managing resources effectively. The first example works because the variables keep the objects alive until the enclosing functions return. The second example prints near-zero times because the unreferenced objects are destroyed the moment they are created, not when the functions exit.

    For more predictable resource management, consider using context managers (with statements), which provide deterministic cleanup of resources.
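
    As a sketch of that approach (FuncTimeCM is a hypothetical name used for illustration), the same timer becomes deterministic when written as a context manager, because __exit__ runs exactly when the with block ends:

    import time

    class FuncTimeCM:
        def __enter__(self):
            # Record the moment the block is entered
            self.start = time.time()
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Runs exactly when the with block exits, even on exceptions
            ex_time = time.time() - self.start
            print(f"Execution Time (sec) - {ex_time}")

    def something():
        with FuncTimeCM():
            time.sleep(1)

    if __name__ == "__main__":
        something()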

    By grasping these concepts, you can write more robust and efficient Python code, ensuring that resources are managed properly.

  • Running a CPU-heavy task, asynchronously, in FastAPI
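
    The snippet below keeps the event loop responsive while saturating one or more CPU cores: the blocking saturate_cpu function is shipped to a ProcessPoolExecutor via run_in_executor, and the async endpoint simply awaits the results.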

    import time
    import asyncio
    import uuid
    from concurrent.futures.process import ProcessPoolExecutor
    
    import fastapi
    from pydantic import BaseModel, Field
    import uvicorn
    
    app = fastapi.FastAPI()
    
    NUM_SECONDS_TO_WORK = 20
    
    def saturate_cpu(name):
        print(f"[{name}] - saturating CPU")
        s = time.time()
        while True:
            if time.time() - s > NUM_SECONDS_TO_WORK:
                print(f"[{name}] - {NUM_SECONDS_TO_WORK} seconds of work has completed.")
                return True
    
    class DB:
        def __init__(self):
            self.work = {}
    
        async def get_work(self):
            return self.work
    
    db = DB()
    
    @app.get("/")
    def get_work():
        loop = asyncio.new_event_loop()
        resp = loop.run_until_complete(db.get_work())
        return resp
    
    class WorkPayload(BaseModel):
        num_cpus: int = Field(1, description="The number of CPUs to saturate while doing work.")
    
    @app.post("/do-work", description="Work will be done to saturate a CPU.")
    async def do_work(work_payload: WorkPayload):
        work_id = str(uuid.uuid4())
        db.work[work_id] = "Working!"
        # get_running_loop() is preferred over get_event_loop() inside a coroutine
        event_loop = asyncio.get_running_loop()

        with ProcessPoolExecutor() as p:
            workers = []
            for work_num in range(work_payload.num_cpus):
                # Each saturate_cpu call runs in its own worker process
                working_proc = event_loop.run_in_executor(p, saturate_cpu, work_num)
                workers.append(working_proc)
    
            await asyncio.gather(*workers)
        db.work[work_id] = "Done"
        return {"msg": "work has completed."}
    
    
    if __name__ == "__main__":
        uvicorn.run(app, workers=1)
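
    Assuming the file is saved as app.py (my name for it, not from the post), start it with python app.py, POST to /do-work with a body like {"num_cpus": 2}, and poll / to watch the job flip from "Working!" to "Done".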
    
  • Calling Objective-C from C++

    Asset Veranda, the media asset manager I'm working on, is built largely from platform-independent code. However, a few functions it needs are inherently platform-specific.

    For instance, there is a way to display a file in its native file browser (Finder on macOS or Windows Explorer on Windows). On Windows, this is easy: include the Windows header and make a few COM calls. On macOS, however, you need to call either the Swift or Objective-C APIs.

    Below, I've outlined the simplest way to do this using the CMake build system.

    main.cpp

    #include <iostream>
    
    #include "ApplePlatformOps.h"
    
    int main() {
        std::string filePath;
        // Open the native macOS folder-selection dialog
        ASelectFolder(filePath);
        std::cout << "Selected: " + filePath << std::endl;
        // Wait for user input before exiting
        std::cout << "Press Enter to continue..." << std::endl;
        std::cin.get();
        return 0;
    }
    

    ApplePlatformOps.h

    #pragma once

    #include <string>

    void ASelectFolder(std::string& filePath);

    ApplePlatformOps.mm

    #include "ApplePlatformOps.h"
    #include <Cocoa/Cocoa.h>
    
    void ASelectFolder(std::string& filePath) {
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        NSOpenPanel *openPanel = [NSOpenPanel openPanel];
    
        [openPanel setCanChooseFiles:NO];
        [openPanel setCanChooseDirectories:YES];
        [openPanel setAllowsMultipleSelection:NO];
    
        NSInteger result = [openPanel runModal];
    
        if (result == NSModalResponseOK) {
            NSURL *url = [openPanel URL];
            NSString *pathString = [url path];
            const char *utf8String = [pathString UTF8String];
            filePath = std::string(utf8String);
        }
    
        [pool release];
    }
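
    The .mm extension is the key detail here: it tells Clang to compile the file as Objective-C++, so the Cocoa message sends and the std::string reference can coexist in the same translation unit.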
    

    CMake file

    cmake_minimum_required(VERSION 3.10)
    project(MyProject)
    
    
    set(CMAKE_CXX_STANDARD 17)
    
    # Add the path to the FindCocoa.cmake module
    set(CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} "${CMAKE_SOURCE_DIR}/cmake/")
    
    # Specify the source files
    set(SOURCE_FILES
            main.cpp
    )
    
    set(HEADER_FILES
    )
    
    if(APPLE)
        # Add the platform specific implementation
        list(APPEND SOURCE_FILES
                ApplePlatformOps.mm
        )
        # Add the platform specific headers
        list(APPEND HEADER_FILES
                ApplePlatformOps.h
        )
    
        find_library(COCOA_LIBRARY Cocoa)
    
    endif()
    
    add_executable(MyProject ${SOURCE_FILES} ${HEADER_FILES})
    
    
    if(APPLE)
        # Link the necessary libraries
        target_link_libraries(MyProject ${COCOA_LIBRARY})
    endif()
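
    Configure and build the usual CMake way, e.g. cmake -B build followed by cmake --build build, then run ./build/MyProject (directory names here are illustrative).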
    
  • There are a lot of opinions (and recommended practices) on how to manage Terraform code between development and production environments. One common approach is to have separate Terraform modules (folders) for each environment. This can be a good approach, but it can also lead to a lot of duplication and potential for drift between the environments. I went down the rabbit hole to determine the most ergonomic way to accomplish this.

    I found that the lowest common denominators to accomplish this are:
    • Use the AWS_PROFILE environment variable to switch between AWS accounts.
    • Use the terraform workspace command to switch between environments (corresponding to the AWS accounts).
    • If you need to create a json outputs file, prefix the output file name with the AWS_PROFILE value.

    Obviously, you can put these commands in a script to link them together (a sketch of such a script follows the list below), but these are the main steps. The workflow goes as follows:

    1. Create a new workspace for each environment.
      • terraform workspace new dev
      • terraform workspace new prod
    2. Set up two AWS profiles in your ~/.aws/credentials file.
      • dev
      • prod
    3. Set the AWS_PROFILE environment variable to the desired profile.
      • export AWS_PROFILE=dev
    4. Switch to the desired workspace.
      • terraform workspace select ${AWS_PROFILE}
    5. Apply the Terraform code.
      • terraform apply
    6. If you need to create a json outputs file, prefix the output file name with the AWS_PROFILE value.
      • terraform output -json > ${AWS_PROFILE}_outputs.json
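
    Here's a minimal sketch of such a wrapper in Python (the script name, argument handling, and run helper are my own choices, not part of a standard workflow):

    import os
    import subprocess
    import sys

    def run(cmd, env):
        # Run a command, failing loudly if it exits non-zero
        subprocess.run(cmd, check=True, env=env)

    def main():
        # Usage: python tf_env.py dev|prod
        profile = sys.argv[1]
        env = {**os.environ, "AWS_PROFILE": profile}
        # Keep the workspace in lockstep with the AWS profile
        run(["terraform", "workspace", "select", profile], env)
        run(["terraform", "apply"], env)
        # Prefix the outputs file with the profile name
        with open(f"{profile}_outputs.json", "w") as f:
            subprocess.run(["terraform", "output", "-json"],
                           check=True, env=env, stdout=f)

    if __name__ == "__main__":
        main()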

    Using this approach, there are a few concessions you will need to make. You will need to remember to:
    • set the AWS_PROFILE environment variable before running any Terraform commands,
    • switch to the correct workspace before running any Terraform commands, and
    • prefix the output file name with the AWS_PROFILE value when creating a json outputs file.

    But, even given those, I found this to be the best way to manage Terraform code across development and production environments at the same time.

  • For posterity, here are the things I read, listened to, and used this past year.

    Books

    • The Priory Of The Orange Tree
    • Never Let Me Go

    Podcasts

    • Software Defined Talk
    • Python Bytes
    • Talk Python To Me
    • Merge Conflict

    Gadgets

    • Drawing Tablet

    Other

    • Woodworking