[{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/categories/artificial-intelligence/","section":"Categories","summary":"","title":"Artificial Intelligence","type":"categories"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/tags/artificial-intelligence/","section":"Tags","summary":"","title":"Artificial Intelligence","type":"tags"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/","section":"Ben Yu","summary":"","title":"Ben Yu","type":"page"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/categories/blog/","section":"Categories","summary":"","title":"Blog","type":"categories"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/tags/blog/","section":"Tags","summary":"","title":"Blog","type":"tags"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/posts/","section":"Posts","summary":"","title":"Posts","type":"posts"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/categories/rust/","section":"Categories","summary":"","title":"Rust","type":"categories"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/tags/rust/","section":"Tags","summary":"","title":"Rust","type":"tags"},{"content":"","date":"28 November 2025","externalUrl":null,"permalink":"/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"Over the past couple of weeks, I continued my journey to learn Rust by building a Redis clone from scratch. I signed up for CodeCrafters, who were offering their Build your own Redis course for free during the month of October. 
The goal of the course wasn\u0026rsquo;t to create a production-ready alternative to Redis, but rather to work on a challenging problem to build a deeper understanding of your favourite programming language and to deal with the challenges of larger projects and codebases. Given the time constraints, I used this as an opportunity to build my own intuition and workflow with Claude Code and AI-assisted coding.\nMy Redis clone was built around a TCP server that handles multiple concurrent connections via a thread pool. The core components include:\nA RESP protocol parser Shared in-memory store with Arc\u0026lt;Mutex\u0026gt; for thread-safe access Transaction support with MULTI/EXEC/DISCARD List \u0026amp; Stream Operations Replication support with leader-follower configuration Pub/Sub support for message broadcasting Concurrency and Thread Safety\nManaging concurrent access to shared state is a difficult challenge in most programming languages. Rust\u0026rsquo;s philosophy is to leverage ownership and type checking, which turns many concurrency errors into compile-time errors. This made it particularly well-suited to agentic coding, since it shortens the agent\u0026rsquo;s validation loop. Two common patterns for handling concurrency are:\nShared-state concurrency, where multiple threads have access to some piece of data Message-passing concurrency, where channels send messages between threads I ended up using both patterns in my implementation: shared memory state for replication support, and message passing for Pub/Sub.\nShared-State Concurrency\nThe first way of handling concurrent access most folks learn is via mutual exclusion. The core idea is to allow only one thread access to a piece of data at any given time. This is usually implemented via a lock or mutex, a data structure that keeps track of who currently has exclusive access to the data. 
Mutexes have a reputation for being difficult to use because you have to remember to acquire the lock before using the data and to release it when you\u0026rsquo;re done.\nlet store = Arc::new(Mutex::new(Store::new()));\n// Clone for each connection handler let store_clone = Arc::clone(\u0026amp;store); pool.execute(move || { handle_connection(stream, store_clone, \u0026hellip;); });\n\u0026hellip;\n// SET command acquires lock first before saving in store Command::Set(key, value, expiry_ms) =\u0026gt; { let mut store = store.lock().unwrap(); let expiry = expiry_ms.map(|ms| Instant::now() + Duration::from_millis(ms)); store.set(key, value, expiry); \u0026quot;+OK\\r\\n\u0026quot;.to_string() }\nRust\u0026rsquo;s ownership system makes mutexes much safer to use. The mutex owns the data it protects, and you can only access the data by calling lock(), which returns a MutexGuard. This guard implements Drop, so when it goes out of scope, the lock is automatically released! The compiler prevents you from accessing the data without holding the lock, eliminating entire classes of bugs.\nMessage Passing \u0026amp; Pub/Sub\nAnother popular approach to implementing safe concurrency is through message passing, where threads communicate by sending data to each other. Rust\u0026rsquo;s standard library provides an implementation of channels for message-passing concurrency through std::sync::mpsc (multi-producer, single-consumer). A channel has two halves: a transmitter and a receiver. One part of your code calls methods on the transmitter with the data you want to send, and another part checks the receiving end for arriving messages. A channel is said to be closed if either the transmitter or receiver half is dropped. 
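As a minimal, self-contained sketch of this pattern (hypothetical code, not taken from my implementation), cloning one Sender per subscriber lets a publisher fan a message out so that every subscriber reads its own copy from its own Receiver:

```rust
use std::sync::mpsc;

// Hypothetical sketch: each subscriber owns the Receiver half of its own
// channel, while the publisher holds one Sender per subscriber and fans
// each published message out through all of them.
fn fan_out(message: &str, n_subscribers: usize) -> Vec<String> {
    let mut senders = Vec::new();
    let mut receivers = Vec::new();
    for _ in 0..n_subscribers {
        let (tx, rx) = mpsc::channel::<String>();
        senders.push(tx);
        receivers.push(rx);
    }
    // Publisher side: send a copy of the message through every sender.
    for tx in &senders {
        tx.send(message.to_string()).unwrap();
    }
    drop(senders); // dropping the transmitters closes each channel
    // Subscriber side: each receiver still yields its buffered copy.
    receivers.iter().map(|rx| rx.recv().unwrap()).collect()
}
```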
For our Redis Pub/Sub implementation, each subscriber connection gets its own receiver end of a channel, while the publisher holds sender ends that can be cloned and distributed:\nstruct PubSub { // Map of channel name to list of (connection_id, sender) pairs channels: HashMap\u0026lt;String, Vec\u0026lt;(usize, mpsc::Sender\u0026lt;(String, String)\u0026gt;)\u0026gt;\u0026gt;, }\nWhen a message is published, we send it through all the senders for that channel. Each connection in subscribe mode receives a dedicated channel receiver for push-based message delivery:\n// Create a channel for receiving published messages if not already created if subscription_tx.is_none() { let (tx, rx) = mpsc::channel::\u0026lt;(String, String)\u0026gt;(); subscription_tx = Some(tx); subscription_rx = Some(rx); }\nlet tx = subscription_tx.as_ref().unwrap();\n// Subscribe to each channel let mut pubsub_lock = pubsub.lock().unwrap();\nfor channel in \u0026amp;channels { if !subscribed_channels.contains(channel) { pubsub_lock.subscribe(channel.clone(), connection_id, tx.clone()); subscribed_channels.insert(channel.clone()); }\n// Send subscription confirmation for each channel // Format: *3\\r\\n$9\\r\\nsubscribe\\r\\n$\u0026lt;channel_len\u0026gt;\\r\\n\u0026lt;channel\u0026gt;\\r\\n:\u0026lt;count\u0026gt;\\r\\n let response = format!( \u0026quot;*3\\r\\n$9\\r\\nsubscribe\\r\\n${}\\r\\n{}\\r\\n:{}\\r\\n\u0026quot;, channel.len(), channel, subscribed_channels.len() ); stream.write_all(response.as_bytes()).ok(); }\ndrop(pubsub_lock); Ok(())\nThe receiver on each connection polls for messages without blocking other operations:\n// Check for published messages if subscribed if let Some(ref rx) = subscription_rx { match rx.try_recv() { Ok((channel, message)) =\u0026gt; { // Only forward messages for channels we\u0026rsquo;re still subscribed to if subscribed_channels.contains(\u0026amp;channel) { // Send published message to subscriber // Format: 
*3\\r\\n$7\\r\\nmessage\\r\\n$\u0026lt;channel_len\u0026gt;\\r\\n\u0026lt;channel\u0026gt;\\r\\n$\u0026lt;msg_len\u0026gt;\\r\\n\u0026lt;message\u0026gt;\\r\\n let response = format!( \u0026quot;*3\\r\\n$7\\r\\nmessage\\r\\n${}\\r\\n{}\\r\\n${}\\r\\n{}\\r\\n\u0026quot;, channel.len(), channel, message.len(), message ); if stream.write_all(response.as_bytes()).is_err() { break; } } continue; // Check for more messages } Err(mpsc::TryRecvError::Empty) =\u0026gt; { // No messages, continue to read commands } Err(mpsc::TryRecvError::Disconnected) =\u0026gt; { // Channel closed break; } } }\nTask-Driven Coding with Agents\nWhile working with Claude Code, I found certain coding practices translated to better performance and accuracy:\nTask-Driven Coding - Try to scope tasks and prompts to the smallest unit of work you can. Larger plans add more complexity and have a higher risk of not producing what you\u0026rsquo;d expect Write Lots of Tests - Adding unit and integration tests lets your agents fully validate their changes and tightens the feedback loop. Rust\u0026rsquo;s compiler errors helped even further, with most issues surfacing immediately as compile-time errors with actionable fixes Explicit is better than clever - Agents seem to prefer simplicity and readability over more complicated abstractions. It\u0026rsquo;s more valuable to have verbose implementations, clear documentation, and smaller, single-responsibility functions\nMy workflow became: describe what you want → AI writes code → compiler validates → iterate only on logic errors. This tight feedback loop meant I could move quickly through implementation details while maintaining correctness.\nStruggling with Leader-Follower Replication\nOne of the more interesting features CodeCrafters asks you to implement is Leader-Follower Replication. This feature allows replica Redis instances to be exact copies of a leader instance. 
At a high level, each instance does the following:\nLeader Instance:\nAccepts PSYNC command from replicas Sends a FULLRESYNC response with an empty RDB file Propagates write commands to all connected replicas Implements WAIT command to ensure replicas acknowledge writes Replica Instance:\nPerforms a three-way handshake (PING, REPLCONF, PSYNC) Receives and applies write commands from the leader Tracks replication offset for acknowledgments Responds to REPLCONF GETACK queries The replication offset tracking was particularly nuanced. Replicas need to track how many bytes they\u0026rsquo;ve processed and respond with their offset when the leader instance requests acknowledgment.\nWhen trying to implement this feature, I found that Claude struggled compared to some of the simpler aspects of the Redis specification. In particular, I kept hitting snags when handling:\nReplication Offset Tracking - Replicas must count RESP protocol bytes for each command. The AI incorrectly assumed you count bytes for each batch Duplicate Stream Reading - After PSYNC, the replica connection serves dual purposes: it\u0026rsquo;s both a command receiver and a replication stream. The AI struggled to reason about when to read from the connection versus when it should be passive. WAIT Command Stream Handling - Implementing WAIT and write acknowledgement was non-trivial. You had to clone the stream once per WAIT operation instead of multiple times, force blocking mode before reading, and handle retries on WouldBlock errors. 
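To make the offset rule concrete, here is a simplified, hypothetical take on the idea (resp_encode and offsets_after are illustrative names, not my project's actual functions): the replica's offset must advance by the encoded RESP byte length of each individual command, never by the size of the batch it arrived in.

```rust
// Hypothetical RESP array encoder: *<n>\r\n followed by $<len>\r\n<arg>\r\n per argument.
fn resp_encode(args: &[&str]) -> String {
    let mut out = format!("*{}\r\n", args.len());
    for arg in args {
        out.push_str(&format!("${}\r\n{}\r\n", arg.len(), arg));
    }
    out
}

// Running replication offset after applying each command in order.
fn offsets_after(commands: &[Vec<&str>]) -> Vec<usize> {
    let mut offset = 0;
    commands
        .iter()
        .map(|cmd| {
            offset += resp_encode(cmd).len(); // count this command's bytes only
            offset
        })
        .collect()
}
```

Tracking a per-command size like this is what let the offset logic be reasoned about locally, one command at a time.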
What usually broke Claude and me out of the loop was similar to working with any other team member on a complicated programming task or bug:\nExtensive debug output - Inserting print statements for every read/write with byte counts made concurrency issues triageable More tests - Each failing test case became a specific, narrow problem that your AI could validate against Lead your AI to a potential solution - Claude seemed to be struggling with offset tracking until I realized it was simply implementing the specification incorrectly. Adding a parse_commands_with_sizes() function helped it break out of that loop immediately.\nOnce this foundation existed, the rest fell into place because the AI could reason about offset tracking locally within each function rather than across the whole system.\nConclusion\nBuilding this Redis clone was an incredible learning experience. It deepened my understanding of concurrent programming in Rust, protocol design, and trade-offs in distributed systems. It also helped me develop an intuition for agentic coding. For a large coding project to be successful, it\u0026rsquo;s important to think deeply about your interfaces and abstractions, your codebase\u0026rsquo;s testability and feedback loop, and how explicit and readable your code and documentation are. These are all good software engineering practices regardless of whether you\u0026rsquo;re working with AI, but with AI-assisted development, these best practices are immediately valuable since your feedback loop is now so much tighter. Bad structure produces confused AI output quickly; good structure lets you move fast while maintaining quality.\nMy project\u0026rsquo;s full source code is available on GitHub. 
If you\u0026rsquo;re interested in database internals or Rust systems programming, I highly recommend building something similar!\nResources\nCodeCrafters Redis Challenge Redis Protocol Specification Redis Commands Documentation The Rust Book Have questions or suggestions? Feel free to open an issue on the GitHub repo or reach out!\n","date":"28 November 2025","externalUrl":null,"permalink":"/posts/writing-a-redis-clone-in-rust-learning-to-vibe-code-with-constraints/","section":"Posts","summary":"","title":"Writing a Redis Clone in Rust - Learning to vibe code with constraints","type":"posts"},{"content":"The Sympathizer has been one of the most shocking books I\u0026rsquo;ve read in the last decade. Viet Thanh Nguyen\u0026rsquo;s Pulitzer Prize-winning novel forces readers into an uncomfortable position: witnessing sexual violence they cannot prevent, mirroring the protagonist\u0026rsquo;s own paralysis as he watches South Vietnamese policemen rape a female communist agent. This brutal and explicit scene raises a question I\u0026rsquo;ve found myself struggling with: Is depicting such acts of horror necessary, or does it reproduce the very violence it critiques?\nNguyen has been remarkably candid about his choice. In a 2025 interview, he said: \u0026ldquo;Two-thirds of the way through the novel, I realized who my narrator was [\u0026hellip;] I liked him a lot\u0026hellip; But I also had to understand that he was misogynistic and masculine, and that I was enjoying that as a writer.\u0026rdquo; This realization led him to push the narrator\u0026rsquo;s casual objectification of women to its logical extreme: sexual violence. The rape scene functions as a moral reckoning. Readers who enjoyed the Captain\u0026rsquo;s womanizing and masculine camaraderie must confront where that spectrum of misogyny leads. 
As Nguyen argued, \u0026ldquo;If I didn\u0026rsquo;t go there, I would be making a mistake, and if I did go there, I would be making a lot of people uncomfortable—but that is actually what they should feel.\u0026rdquo; I am still confronting this discomfort weeks after reading the final chapters of this novel.\nThe communist agent\u0026rsquo;s name, \u0026ldquo;Viet Nam\u0026rdquo;, makes explicit what scholar Sylvia Shin Huey Chong identifies as the novel\u0026rsquo;s central tension: Vietnamese women\u0026rsquo;s bodies serve as the site where violence against the nation is enacted and understood. The rape of Vietnamese women by Vietnamese men represents not just American imperialism but the internalized violence colonialism leaves behind.\nThis connects to the novel\u0026rsquo;s critique of orientalist representation. The Hollywood film-within-the-novel, \u0026ldquo;The Hamlet,\u0026rdquo; includes a rape scene the Auteur defends as \u0026ldquo;realistic\u0026rdquo; because \u0026ldquo;rape happens\u0026rdquo; in war. Yet Nguyen\u0026rsquo;s novel then includes its own rape scene, creating what Chong calls an \u0026ldquo;unresolvable ethical bind\u0026rdquo;. The protagonist in the novel doesn\u0026rsquo;t commit rape, but he witnesses it, unable or unwilling to intervene. This distinction proves crucial. The novel explores how war implicates those who watch, benefit from, or fail to prevent atrocity. The Captain holds a Coke bottle throughout the rape, which the policemen then use to violate the woman. His passivity becomes weaponized. During the protagonist\u0026rsquo;s own torture in a reeducation camp, his interrogator Man forces him to remember this complicity. The torture serves not to extract information but to make the Captain acknowledge: you could have acted, you didn\u0026rsquo;t, and that silence makes you guilty. 
For most Americans, Nguyen suggests, this is how war matters, not through direct violence but through comfortable distance.\nSylvia Chong asks: \u0026ldquo;Is it possible to write war without traveling through the abject trope of rape?\u0026rdquo; The novel\u0026rsquo;s treatment of female characters, primarily as victims whose suffering catalyzes male moral awakening, suggests the answer may be no, at least within the masculine war narrative tradition Nguyen both inhabits and critiques. The novel declares: \u0026ldquo;Massacre is obscene. Torture is obscene. Three million dead is obscene.\u0026rdquo; By this logic, being disturbed by sexual content while accepting mass violence reveals our moral failures. Yet this argument doesn\u0026rsquo;t resolve whether replicating gendered violence serves justice or simply aestheticizes suffering and misogyny.\nWhat remains is discomfort, which might be precisely Nguyen\u0026rsquo;s point. We cannot look away, cannot claim innocence, cannot separate the pleasures of narrative from complicity in violence. The question isn\u0026rsquo;t whether sexual violence makes us uncomfortable, but whether our discomfort produces a deeper understanding of society\u0026rsquo;s broader problems or merely voyeurism and immoral entertainment.\n","date":"2 November 2025","externalUrl":null,"permalink":"/posts/book-review-the-sympathizer/","section":"Posts","summary":"","title":"Book Review: The Sympathizer","type":"posts"},{"content":"When I first picked up Thorsten Ball\u0026rsquo;s excellent book \u0026ldquo;Writing an Interpreter in Go\u0026rdquo;, I knew I wanted to tackle the challenge of implementing the Monkey programming language. However, instead of following along in Go, I decided to take the opportunity to improve my Rust!\nWhat is Monkey?\nMonkey is a C-like programming language designed specifically for learning about interpreters and language implementation. 
It includes:\nVariable bindings: let x = 5; Integers and booleans: 42, true, false Arithmetic expressions: +, -, *, / Built-in functions: len(), first(), last(), rest(), push() First-class functions: Functions are values that can be passed around Higher-order functions: Functions that take other functions as arguments Closures: Functions that capture their environment Here\u0026rsquo;s a taste of what Monkey looks like:\n// Fibonacci function with closures let fibonacci = fn(x) { if (x == 0) { 0 } else { if (x == 1) { 1 } else { fibonacci(x - 1) + fibonacci(x - 2) } } };\nfibonacci(10); // =\u0026gt; 55\nArchitecture Overview\nMy Rust implementation follows the classic interpreter architecture with three main components:\nLexer - tokenizes the source code\nParser - builds an Abstract Syntax Tree (AST)\nEvaluator - walks the AST and executes the program\nLexer\nThe lexer converts a stream of characters into tokens—the first step in understanding any program. Think of it as breaking down a sentence into individual words and punctuation marks, but for code.\nThe core responsibility is simple: define your set of supported tokens, scan through your program character by character, and output a list of tokens that the parser can work with. Here\u0026rsquo;s how I defined the token types in my implementation:\n#[derive(Debug, PartialEq, Clone)] pub enum Token { Ident(String), Integer(i32), True, False, Illegal, Eof, Equal, Plus, Comma, Semicolon, LParen, RParen, LBrace, RBrace, Function, Let, Assign, Bang, Dash, ForwardSlash, Asterisk, NotEqual, LessThan, GreaterThan, Return, If, Else, }\nRust\u0026rsquo;s enum system was perfect for representing tokens with values (like Ident(String) and Integer(i32)). 
This made my Lexer implementation extremely simple:\nimpl Lexer { pub fn next_token(\u0026amp;mut self) -\u0026gt; Token {\nself.skip_whitespace(); let tok = match self.cur_char { b'{' =\u0026gt; Token::LBrace, b'}' =\u0026gt; Token::RBrace, b'(' =\u0026gt; Token::LParen, b')' =\u0026gt; Token::RParen, b',' =\u0026gt; Token::Comma, b';' =\u0026gt; Token::Semicolon, b'+' =\u0026gt; Token::Plus, b'-' =\u0026gt; Token::Dash, b'!' =\u0026gt; { if self.peek_char() == b'=' { self.read_char(); Token::NotEqual } else { Token::Bang } }, b'\u0026gt;' =\u0026gt; Token::GreaterThan, b'\u0026lt;' =\u0026gt; Token::LessThan, b'*' =\u0026gt; Token::Asterisk, b'/' =\u0026gt; Token::ForwardSlash, b'=' =\u0026gt; { if self.peek_char() == b'=' { self.read_char(); Token::Equal } else { Token::Assign } }, 0 =\u0026gt; Token::Eof, c =\u0026gt; { if Self::is_letter(c) { let id = self.read_identifier(); return match id.as_str() { \u0026quot;fn\u0026quot; =\u0026gt; Token::Function, \u0026quot;let\u0026quot; =\u0026gt; Token::Let, \u0026quot;true\u0026quot; =\u0026gt; Token::True, \u0026quot;false\u0026quot; =\u0026gt; Token::False, \u0026quot;if\u0026quot; =\u0026gt; Token::If, \u0026quot;else\u0026quot; =\u0026gt; Token::Else, \u0026quot;return\u0026quot; =\u0026gt; Token::Return, _ =\u0026gt; Token::Ident(id), }; } else if c.is_ascii_digit() { let id = self.read_number(); return Token::Integer(id); } else { Token::Illegal } } }; self.read_char(); return tok; } }\nReading tokens became a simple matter of pattern matching and advancing the current cursor position.\nParser\nNow that we have a sequence of input tokens, we can build our parser, which takes the input tokens and converts them into a data structure representation, most typically an Abstract Syntax Tree (AST). 
This tree structure represents the hierarchical syntax of the program in a way that will make it easy for our evaluator to traverse.\nWe implement a Recursive Descent Parser, commonly known as a Top-Down Operator Precedence parser or Pratt Parser. This approach starts at the root node of the AST and descends down, mirroring how we naturally think about ASTs. While it\u0026rsquo;s not the fastest parsing method and lacks formal proof of correctness, it\u0026rsquo;s the easiest to learn and implement.\nI defined the AST nodes using enums and structs:\n#[derive(Debug, Clone)] pub enum Statement { Let(String, Expression), Return(Expression), Expr(Expression), }\n#[derive(Debug, Clone)] pub enum Expression { Ident(String), Lit(Literal), Prefix(Token, Box\u0026lt;Expression\u0026gt;), Infix(Token, Box\u0026lt;Expression\u0026gt;, Box\u0026lt;Expression\u0026gt;), If(Box\u0026lt;Expression\u0026gt;, BlockStatement, Option\u0026lt;BlockStatement\u0026gt;), Function(Vec\u0026lt;String\u0026gt;, BlockStatement), FunctionCall(Box\u0026lt;Expression\u0026gt;, Vec\u0026lt;Expression\u0026gt;), }\nStatements\nParsing statements is relatively straightforward: we process tokens from left to right, expect or reject the next token, and if it matches our expectations, we return an AST node. Take a typical let statement, let x = a + b; for example:\nfn parse_let_statement(\u0026amp;mut self) -\u0026gt; Result\u0026lt;Statement, ParserError\u0026gt; { let ident = match \u0026amp;self.peek_token { Token::Ident(id) =\u0026gt; id.clone(), t =\u0026gt; { return Err(self.error_no_identifier(t)); } };\n// Consume identifier self.next_token(); self.expect_peek_token(\u0026amp;Token::Assign)?; self.next_token(); let expr = self.parse_expression(Precedence::Lowest)?; if self.peek_token_is(\u0026amp;Token::Semicolon) { self.next_token(); } Ok(Statement::Let(ident, expr)) }\nExpressions \u0026amp; Pratt Parsing\nThe real challenge comes with parsing expressions! Even with a simple example like 2 + 3 * 4, operator precedence and associativity matter. This is where Top-Down Operator Precedence (Pratt Parsing) shines. 
The elegance of Pratt parsing lies in its simplicity and extensibility:\nSingle parsing function: One parse_expression function handles all precedence levels Natural handling of associativity: Left vs right associativity is handled by a simple precedence adjustment Easy to extend \u0026amp; readable: Adding new operators requires just updating the precedence table and adding cases. The parsing logic directly reflects the mathematical properties of operators It took me a while to understand this concept fully; most of the implementation closely follows the patterns from the book. At a high level, Pratt Parsing follows a simple recursive algorithm:\nfn parse_expression(\u0026amp;mut self, precedence: Precedence) -\u0026gt; Result\u0026lt;Expression, ParserError\u0026gt; { // Parse the left side (prefix) let mut left_exp = self.parse_prefix_expression()?;\n// Continue parsing infix operators while they have higher precedence while !self.peek_token_is(\u0026amp;Token::Semicolon) \u0026amp;\u0026amp; precedence \u0026lt; self.next_token_precedence() { left_exp = self.parse_infix_expression(left_exp)?; self.next_token(); } Ok(left_exp) }\nLet\u0026rsquo;s trace how this parser handles 2 + 3 * 4 to see the magic in action:\nStart: parse_expression(Precedence::Lowest) Parse prefix: Consumes 2, returns Literal::Integer(2) Check infix: Current token is Plus with precedence 1 ≥ 0, so continue Parse infix: Consumes Plus Calls parse_expression(2) for right side (precedence + 1) Nested call: Parse prefix: Consumes 3, returns Literal::Integer(3) Check infix: Current token is Multiply with precedence 2 ≥ 2, so continue Parse infix: Consumes Multiply, calls parse_expression(3) Parse prefix: Consumes 4, returns Literal::Integer(4) Check infix: EOF has precedence 0 \u0026lt; 3, so stop Returns Expression::Infix with * operator Returns Expression::Infix with + operator The result is the correctly structured AST: +(2, *(3, 4)) which respects operator precedence!\nAlthough tracing 
through examples made sense during implementation, it didn\u0026rsquo;t offer me an intuitive explanation of how it worked. I found Alex Kladov\u0026rsquo;s explanation to be more understandable. You can think of precedence as \u0026lsquo;binding power\u0026rsquo;. Operators like * have higher precedence and binding power than + and will hold their operands closer.\nexpr: A + B * C power: 3 3 5 5\nSince we implemented a left-to-right parser, when you have operators with the same precedence, operands on the right will have slightly higher power.\nexpr: A + B + C power: 0 3 3.1 3 3.1 0\nSo the first operator will group its operands first, leading to (A + B) + C.\nMy final implementation:\n#[derive(Debug, PartialEq, PartialOrd)] pub enum Precedence { Lowest, Equals, // == or != LessGreater, // \u0026gt; or \u0026lt; Sum, // + or - Product, // * or / Prefix, Call, }\nimpl Parser { fn parse_expression(\u0026amp;mut self, precedence: Precedence) -\u0026gt; Result\u0026lt;Expression, ParserError\u0026gt; { let mut left_expr = match self.current_token { Token::Ident(ref id) =\u0026gt; Ok(Expression::Ident(id.clone())), Token::Integer(i) =\u0026gt; Ok(Expression::Lit(Literal::Integer(i))), Token::True =\u0026gt; Ok(Expression::Lit(Literal::Boolean(true))), Token::False =\u0026gt; Ok(Expression::Lit(Literal::Boolean(false))), Token::Bang | Token::Dash =\u0026gt; self.parse_prefix_expression(), Token::LParen =\u0026gt; { self.next_token(); let expr = self.parse_expression(Precedence::Lowest); self.expect_peek_token(\u0026amp;Token::RParen)?; expr }, Token::If =\u0026gt; self.parse_if_expression(), Token::Function =\u0026gt; self.parse_fn_expression(), _ =\u0026gt; { return Err(ParserError::new(format!( \u0026quot;No prefix parse function for {} is found\u0026quot;, self.current_token ))); } };\nwhile !self.peek_token_is(\u0026amp;Token::Semicolon) \u0026amp;\u0026amp; precedence \u0026lt; self.next_token_precedence() { match self.peek_token { Token::Plus | Token::Dash 
| Token::Asterisk | Token::ForwardSlash | Token::Equal | Token::NotEqual | Token::LessThan | Token::GreaterThan =\u0026gt; { self.next_token(); let expr = left_expr.unwrap(); left_expr = self.parse_infix_expression(expr); } Token::LParen =\u0026gt; { self.next_token(); let expr = left_expr.unwrap(); left_expr = self.parse_fn_call_expression(expr); } _ =\u0026gt; return left_expr, } } left_expr } pub fn next_token_precedence(\u0026amp;self) -\u0026gt; Precedence { match \u0026amp;self.peek_token { Token::Asterisk | Token::ForwardSlash =\u0026gt; Precedence::Product, Token::Plus | Token::Dash =\u0026gt; Precedence::Sum, Token::LessThan | Token::GreaterThan =\u0026gt; Precedence::LessGreater, Token::Equal | Token::NotEqual =\u0026gt; Precedence::Equals, Token::LParen =\u0026gt; Precedence::Call, _ =\u0026gt; Precedence::Lowest, } } }\nAgain, Rust\u0026rsquo;s pattern matching made this implementation clean and easy to write. One of the biggest challenges was managing ownership in the recursive AST structure. Boxing expressions (Box\u0026lt;Expression\u0026gt;) solved the recursive type issue, and cloning where necessary kept the borrow checker happy while maintaining clean code.\nEvaluation\nThe evaluator is where our Monkey programs come to life. I implemented a tree-walking interpreter, a straightforward approach that traverses the AST and executes code by recursively visiting each node and performing the associated operation. The core concept is beautifully simple. 
For each AST node, we:\nExamine the node type Recursively evaluate child nodes if needed Perform the operation specific to that node type Return the result Here\u0026rsquo;s the basic algorithm in pseudocode:\nfunction eval(astNode) { if (astNode is integerLiteral) { return astNode.integerValue } else if (astNode is booleanLiteral) { return astNode.booleanValue\n} else if (astNode is infixExpression) { leftEvaluated = eval(astNode.Left) rightEvaluated = eval(astNode.Right) if astNode.Operator == \u0026quot;+\u0026quot; { return leftEvaluated + rightEvaluated } else if astNode.Operator == \u0026quot;-\u0026quot; { return leftEvaluated - rightEvaluated } } }\nLet\u0026rsquo;s trace through evaluating (5 + 3) * 2:\nStart: eval(InfixExpression { \u0026quot;*\u0026quot;, left: InfixExpression { \u0026quot;+\u0026quot;, 5, 3 }, right: 2 }) Evaluate left operand: eval(InfixExpression { \u0026quot;+\u0026quot;, 5, 3 }) Evaluate left: eval(5) → Object::Integer(5) Evaluate right: eval(3) → Object::Integer(3) Apply operator: 5 + 3 → Object::Integer(8) Evaluate right operand: eval(2) → Object::Integer(2) Apply main operator: 8 * 2 → Object::Integer(16) The beauty is that each recursive call handles exactly one level of the tree, making the algorithm both simple to understand and implement.\nMy final implementation looked something like:\nfn eval_expression(expr: \u0026amp;Expression, env: \u0026amp;Env) -\u0026gt; Result\u0026lt;Object, EvalError\u0026gt; { match expr { Expression::Ident(id) =\u0026gt; eval_identifier(\u0026amp;id, env), Expression::Lit(lit) =\u0026gt; eval_literal(lit), Expression::Prefix(op, expr) =\u0026gt; { let right = eval_expression(expr, env)?; eval_prefix_expression(op, \u0026amp;right) }, Expression::Infix(op, left, right) =\u0026gt; { let left = eval_expression(left, \u0026amp;Rc::clone(env))?; let right = eval_expression(right, \u0026amp;Rc::clone(env))?; eval_infix_expression(op, \u0026amp;left, \u0026amp;right) }, 
Expression::If(condition, consequence, alternative) =\u0026gt; { let condition = eval_expression(condition, \u0026amp;Rc::clone(env))?;\nif is_truthy(\u0026amp;condition) { eval_block_statement(consequence, env) } else { match alternative { Some(alt) =\u0026gt; eval_block_statement(alt, env), None =\u0026gt; Ok(Object::Null), } } }, Expression::Function(params, body) =\u0026gt; Ok(Object::Function( params.clone(), body.clone(), Rc::clone(\u0026amp;env), )), Expression::FunctionCall(func, args) =\u0026gt; { let func = eval_expression(func, \u0026amp;Rc::clone(env))?; let args: Result\u0026lt;Vec\u0026lt;Object\u0026gt;, EvalError\u0026gt; = args.iter().map(|arg| eval_expression(arg, env)).collect(); apply_function(\u0026amp;func, \u0026amp;args?) } _ =\u0026gt; Err(EvalError::new(format!( \u0026quot;unknown expression: {}\u0026quot;, expr ))), } }\nThe most challenging portion of implementing the evaluation algorithm was handling function calls and closures. This required the introduction of environment management, where I learned a useful pattern for handling shared, mutable state in Rust.\nShared Mutable References\nIn languages with garbage collection, you might simply store a reference to the parent environment. But Rust\u0026rsquo;s ownership system doesn\u0026rsquo;t allow multiple mutable references to the same data. Consider this Monkey code:\nlet makeCounter = fn() { let count = 0; fn() { count = count + 1; count } };\nlet counter1 = makeCounter(); let counter2 = makeCounter();\ncounter1(); // =\u0026gt; 1 counter1(); // =\u0026gt; 2 counter2(); // =\u0026gt; 1 (independent counter)\nEach closure needs to:\nShare the same environment with its parent scope Mutate variables in that shared environment Outlive the function that created it The Rc\u0026lt;RefCell\u0026gt; pattern solves this by combining two powerful Rust concepts. 
Rc allows multiple owners shared, immutable access to the same data and automatically deallocates it when the last reference is dropped. RefCell provides interior mutability (mutation through shared references) and moves borrow checking from compile-time to runtime. Using this pattern, I implemented the environment management, which was essentially a chain of nested hashmaps holding each closure\u0026rsquo;s objects:\nuse crate::object::*; use std::collections::HashMap; use std::rc::Rc; use std::cell::RefCell;\npub type Env = Rc\u0026lt;RefCell\u0026lt;Environment\u0026gt;\u0026gt;;\n#[derive(Debug, Default, Clone)] pub struct Environment { store: HashMap\u0026lt;String, Rc\u0026lt;Object\u0026gt;\u0026gt;, outer: Option\u0026lt;Env\u0026gt;, }\nimpl Environment { pub fn new_enclosed_environment(outer: \u0026amp;Env) -\u0026gt; Self { let mut env: Environment = Default::default(); env.outer = Some(Rc::clone(outer)); env }\npub fn get(\u0026amp;self, name: \u0026amp;str) -\u0026gt; Option\u0026lt;Rc\u0026lt;Object\u0026gt;\u0026gt; { match self.store.get(name) { Some(obj) =\u0026gt; Some(Rc::clone(obj)), None =\u0026gt; { if let Some(outer) = \u0026amp;self.outer { outer.borrow().get(name) } else { None } } } } pub fn set(\u0026amp;mut self, name: String, val: Rc\u0026lt;Object\u0026gt;) { self.store.insert(name, val); } }\nWhen evaluating a function definition, you would then create a new environment that \u0026ldquo;encloses\u0026rdquo; the current one.\nWhen looking into other ways to solve the problem, I found that this pattern isn\u0026rsquo;t generally recommended, since for most common use-cases you don\u0026rsquo;t need multiple mutable references to the same data. Ultimately though, it seemed to be the best alternative:\nArc\u0026lt;Mutex\u0026gt;: Overkill for a single-threaded interpreter, since Mutex exists for thread safety. Lifetime parameters: Would make the type system extremely complex, and closures need to outlive their creating scope. 
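To make the pattern concrete, here's a minimal, self-contained sketch of the enclosed-environment lookup. It is simplified from the real interpreter: values are plain i64 instead of the Object enum, and the helper names (new, new_enclosed, associated get/set) are my own for this illustration.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// Simplified stand-in for the interpreter's environment: values are i64 here.
type Env = Rc<RefCell<Environment>>;

#[derive(Default)]
struct Environment {
    store: HashMap<String, i64>,
    outer: Option<Env>,
}

impl Environment {
    fn new() -> Env {
        Rc::new(RefCell::new(Environment::default()))
    }

    // A child scope keeps an Rc handle to its parent, so lookups can walk outward.
    fn new_enclosed(outer: &Env) -> Env {
        Rc::new(RefCell::new(Environment {
            store: HashMap::new(),
            outer: Some(Rc::clone(outer)),
        }))
    }

    fn get(env: &Env, name: &str) -> Option<i64> {
        let e = env.borrow();
        match e.store.get(name) {
            Some(v) => Some(*v),
            None => e.outer.as_ref().and_then(|o| Environment::get(o, name)),
        }
    }

    fn set(env: &Env, name: &str, val: i64) {
        env.borrow_mut().store.insert(name.to_string(), val);
    }
}

fn main() {
    let global = Environment::new();
    Environment::set(&global, "count", 0);

    // A closure's environment encloses the scope it was created in.
    let closure_env = Environment::new_enclosed(&global);

    // Writes through one Rc handle are visible through the other...
    Environment::set(&global, "count", 1);
    assert_eq!(Environment::get(&closure_env, "count"), Some(1));

    // ...while a binding set in the child shadows the parent without mutating it.
    Environment::set(&closure_env, "count", 99);
    assert_eq!(Environment::get(&global, "count"), Some(1));
    assert_eq!(Environment::get(&closure_env, "count"), Some(99));
    println!("ok");
}
```

Because each scope only holds an Rc to its parent, an inner environment can outlive the call that created it, which is exactly what makeCounter-style closures need.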
Conclusion\nImplementing Monkey in Rust was an incredibly rewarding experience that deepened my understanding of both language implementation and Rust\u0026rsquo;s unique features. I loved the simplicity of Rust\u0026rsquo;s match expressions, and its type safety was helpful in implementing error handling and understanding whether my implementation was complete. My complete source code is available on GitHub and you can try out my WASM version at https://toil.ing/monkey-lang/\nWhether you\u0026rsquo;re learning about interpreters or exploring Rust\u0026rsquo;s capabilities, I encourage you to dive in and try implementing your own version!\n","date":"27 September 2025","externalUrl":null,"permalink":"/posts/writing-an-intepreter-in-rust/","section":"Posts","summary":"","title":"Writing an Interpreter in Rust","type":"posts"},{"content":"","date":"20 August 2025","externalUrl":null,"permalink":"/categories/art/","section":"Categories","summary":"","title":"Art","type":"categories"},{"content":"","date":"20 August 2025","externalUrl":null,"permalink":"/tags/art/","section":"Tags","summary":"","title":"Art","type":"tags"},{"content":"Inspired by Claude Monet\u0026rsquo;s iconic water lily series, this collection of paintings explores the timeless beauty of lily ponds through the vibrant medium of acrylic. From the soft, dreamy pastels of early morning light to the deeper, more saturated tones of midday sun, each painting offers my attempt at a unique interpretation of Monet\u0026rsquo;s beloved subject matter.\n","date":"20 August 2025","externalUrl":null,"permalink":"/posts/lillies/","section":"Posts","summary":"","title":"Lily Pond Reflections: An Acrylic Homage to Monet","type":"posts"},{"content":"Kazuo Ishiguro\u0026rsquo;s haunting masterpiece \u0026ldquo;Never Let Me Go\u0026rdquo; presents a world that feels both impossibly distant and uncomfortably familiar. 
What makes Ishiguro\u0026rsquo;s novel so disturbing isn\u0026rsquo;t its science fiction premise, but how it illuminates the mechanisms of dehumanization that operate in our own world. The story\u0026rsquo;s power lies not in shock value, but in its quiet revelation of how we can unknowingly create and maintain systems of oppression through careful conditioning, willful ignorance, and the gradual erosion of empathy.\nThe most unsettling aspect of \u0026ldquo;Never Let Me Go\u0026rdquo; is how completely the students of Hailsham accept their fate. They never rebel, never question the fundamental injustice of their existence. Instead, they focus on smaller concerns: art projects, romantic entanglements, rumors about \u0026ldquo;deferrals\u0026rdquo; that might delay their donations. This acceptance isn\u0026rsquo;t born from ignorance but from a lifetime of careful psychological conditioning that makes resistance seem not just impossible, but unthinkable. This mirrors troubling patterns in our own contemporary society, where marginalized communities often face systemic oppression through institutions designed to appear benevolent. Just as Hailsham\u0026rsquo;s caring guardians and emphasis on creativity masked its true purpose, our own modern systems of control often operate through seemingly positive frameworks - educational programs that actually perpetuate inequality, privatized healthcare systems that prioritize profit over patients, or criminal justice reforms that expand surveillance while claiming to promote rehabilitation.\nPerhaps even more chilling than the clones\u0026rsquo; acceptance is the complicity of the \u0026ldquo;normal\u0026rdquo; humans who benefit from their suffering. The original humans in Ishiguro\u0026rsquo;s world don\u0026rsquo;t see themselves as monsters—they\u0026rsquo;ve simply constructed a system where someone else\u0026rsquo;s humanity becomes invisible. 
The clones aren\u0026rsquo;t quite people in their minds; they\u0026rsquo;re something other, something less, something created specifically to serve. This dynamic resonates powerfully with contemporary political discourse around immigration, poverty, and social welfare. We see similar patterns of dehumanization when certain groups are labeled as \u0026ldquo;drains on society,\u0026rdquo; \u0026ldquo;job stealers,\u0026rdquo; or simply \u0026ldquo;illegal.\u0026rdquo; The language creates psychological distance, making it easier to support policies that cause real human suffering while allowing those who benefit to maintain their moral self-image. Most insidiously, the guardians at Hailsham are presented as reformers who provided their students with a better childhood than clones elsewhere received. Miss Emily speaks proudly of treating the children as individuals, encouraging their creativity, and giving them years of relatively normal life before their donations begin. This \u0026ldquo;progress\u0026rdquo; serves primarily to make the system more palatable to those who operate it, not to fundamentally challenge its premises. The clones are still created to die for others\u0026rsquo; benefit; they\u0026rsquo;re just given art classes first. We see echoes of this in political movements that celebrate incremental reforms while leaving fundamental power structures intact. Corporate diversity initiatives that don\u0026rsquo;t address wage gaps or workplace exploitation. Criminal justice reforms that tweak sentencing guidelines while maintaining a system that disproportionately targets certain communities. 
Environmental policies that promote individual responsibility while avoiding systemic challenges to industries driving climate change.\nEchoes in \u0026ldquo;Klara and the Sun\u0026rdquo;\nIshiguro returns to similar themes in his later novel \u0026ldquo;Klara and the Sun,\u0026rdquo; in which he explores artificial intelligence through the perspective of Klara, an Artificial Friend designed to serve a sick child named Josie. While the surface narrative differs dramatically from \u0026ldquo;Never Let Me Go,\u0026rdquo; the underlying mechanisms of exploitation remain strikingly similar. Klara, like the clones, accepts her subordinate role completely, viewing her purpose—to serve and eventually be discarded—as natural and even noble. The wealthy families who purchase AFs don\u0026rsquo;t see them as enslaved beings but as sophisticated appliances, much like how the original humans in \u0026ldquo;Never Let Me Go\u0026rdquo; view clones as medical resources rather than people. Both innovations primarily serve to reinforce existing power structures while creating new categories of exploitable beings. The parallel becomes even more chilling when we consider how artificial intelligence is being deployed in our own society—often to automate decisions about employment, lending, policing, and social services in ways that can perpetuate or amplify existing biases while appearing objective and fair.\nWhat makes Ishiguro\u0026rsquo;s dystopian vision so unsettling is not just its content but how it\u0026rsquo;s delivered from the perspective of its narrators. Ishiguro\u0026rsquo;s narrators don\u0026rsquo;t lie to us—they simply present their realities through the lens of their conditioning, revealing how completely a system can shape not just behavior but perception itself. 
Kathy tells us her story with the matter-of-fact tone of someone describing a perfectly normal life, using clinical terms like \u0026ldquo;donations\u0026rdquo; and \u0026ldquo;completion\u0026rdquo; to describe organ harvesting and death. This narrative technique mirrors how real-world injustices are often obscured through sanitized language and institutional double-speak. Similarly, in \u0026ldquo;Klara and the Sun,\u0026rdquo; Klara describes her world with childlike wonder and devotion, never questioning why she and her kind are treated as disposable objects. Her love for Josie and gratitude toward the family that purchased her prevent her from recognizing her own exploitation. The most horrifying details emerge not through dramatic revelation but through the accumulation of small, seemingly innocent observations that gradually reveal the true nature of her world. This narrative strategy forces readers to become active participants in uncovering the horror, making us complicit in the same gradual recognition that the characters themselves experience. We, like the narrators, must learn to see past comfortable euphemisms and normalized brutality to understand what\u0026rsquo;s really happening. We see this in how \u0026ldquo;enhanced interrogation\u0026rdquo; replaced \u0026ldquo;torture,\u0026rdquo; how \u0026ldquo;collateral damage\u0026rdquo; describes civilian casualties, how \u0026ldquo;right-sizing\u0026rdquo; means layoffs, and how \u0026ldquo;detention centers\u0026rdquo; house asylum seekers in conditions that would be called imprisonment in any other context.\nIn our current political moment, when democratic institutions face unprecedented pressure and marginalized communities experience increasing threats, Ishiguro\u0026rsquo;s insights feel particularly urgent. 
The book reminds us that the most dangerous forms of dehumanization often operate through seemingly reasonable justifications, gradual implementation, and the careful management of public consciousness. The novel\u0026rsquo;s ultimate message isn\u0026rsquo;t despair but awareness. By understanding how systems of oppression maintain themselves—through conditioning, complicity, controlled narratives, and false progress—we become better equipped to recognize these patterns in our own world and, hopefully, to resist them.\n","date":"19 June 2025","externalUrl":null,"permalink":"/posts/never-let-me-go/","section":"Posts","summary":"","title":"Book Review: Never Let Me Go - Kazuo Ishiguro","type":"posts"},{"content":"Yu Miri’s Tokyo Ueno Station is a sobering counterpoint to the idyllic vision of Japan we often see presented in the media. Through the eyes of Kazu, a ghost haunting Ueno Park after a life of hardship and homelessness, the novel presents the stark realities of capitalism’s human cost. It reminds us of the struggles of those pushed to society’s margins, whose existence is often erased in favor of progress and a perception of societal stability.\nAt its heart, Tokyo Ueno Station is a meditation on the unfairness of poverty. The novel highlights how capitalism turns poverty into a ‘sin’—a moral failing rather than a systemic issue. Kazu’s reflection:\n“I thought what a thing of sin poverty was, that there could be nothing more sinful than forcing a small child to lie”\nencapsulates the deep injustice of a world that blames the poor for their suffering rather than the structures that perpetuate it. This sentiment resonated with me deeply, having lived in San Francisco for almost 6 years now, where homelessness and the fentanyl crisis are at the forefront of everyday life. It’s disturbingly easy to avert our gaze, to forget that each homeless person has a story, a tragedy that led them to their current situation. 
Miri’s novel forces us to confront this forgetting, to recognize the humanity in those whom society has cast aside.\nThe book also critiques Japan’s ideological constructs, particularly kokutai (国体) — the idea of the nation as a unified body, with the emperor as its head and all citizens tied to a common fate. Kazu’s life stands as a testament to the falsity of this notion. Far from being an interconnected system of mutual prosperity, Japan’s national body is riddled with arbitrary disparity and parasitic inequality. His life eerily parallels that of Emperor Akihito, both being born in the same year, yet their fates could not have been more different. While the emperor lived a life of privilege and protection, Kazu endured relentless hardship and displacement. This stark contrast showcases the failure of kokutai, revealing how the ideology masks deep-seated social inequalities rather than uniting the people under a shared destiny. The reality of life for folks like Kazu is that this supposed unity is merely a fantasy— one that conceals the exploitation and suffering that keeps the system running. Tragically though, Kazu can only make sense of his misfortune through the same ideology that has failed him, grasping at sentiments of connection even as he drifts further into oblivion.\nMiri’s novel is both poetic and devastating, a work that lingers long after the final page. It is a call to see—to truly see—those whom society has deemed invisible. 
Tokyo Ueno Station forces its readers to confront the brutalities of capitalism and the structures that sustain inequality, while reminding us that every person, no matter how lost, carries a story worth remembering.\n","date":"11 February 2025","externalUrl":null,"permalink":"/posts/tokyo-ueno-station-yu-miri/","section":"Posts","summary":"","title":"Book Review: Tokyo Ueno Station - Yu Miri","type":"posts"},{"content":"Haruki Murakami’s latest novel, The City and Its Uncertain Walls, is perhaps his most accessible work to date. While still brimming with his signature blend of magical realism and surreal elements, this novel feels like a more restrained and introspective iteration of the themes he has explored throughout his career. Longtime readers may appreciate its familiar dreamlike quality, but newcomers might find it an easier entry point into Murakami’s world compared to some of his more labyrinthine works like 1Q84 or The Wind-Up Bird Chronicle.\nOne of the most notable aspects of this novel is its departure from some of Murakami’s historically problematic tropes. His usual objectification of women and fetishization of certain body parts that have been criticized in his previous works have been significantly toned down. While this shift is refreshing, the novel is not without its flaws. The pacing, particularly in the latter half, feels uneven. Parts 2 and 3 seemed hastily constructed, as if they were added as an afterthought rather than as carefully woven parts of the story.\nA prime example of this is the plotline involving the Yellow Submarine Boy. While intriguing at first, his role ultimately feels more like a convenient plot device than a meaningful addition to the narrative. His inclusion seemed to be primarily a mechanism to guide the protagonist back into the walled city, but beyond that, his significance feels shallow. 
This jarring transition is especially frustrating because it follows Koyasu’s deeply emotional backstory, in which we learn about the tragic loss of his family. The emotional weight of that revelation is undercut by this abrupt shift in focus, leaving the reader feeling disconnected.\nComparing this novel to Hard-Boiled Wonderland and the End of the World, it’s clear that Murakami has taken a different approach to his dual-world storytelling. In Hard-Boiled Wonderland, the alternating narrators and converging plotlines provided a compelling structure that kept the reader engaged while allowing each narrative arc to develop its own momentum. The City and Its Uncertain Walls, in contrast, feels more fragmented, with less cohesion between its narrative strands. The result is a novel that, while thematically rich, sometimes struggles to maintain its pacing and focus.\nAt its core, however, this novel appears to be a reflection on modern societal divisions. Murakami seems to have taken his original walled city short story and expanded it into an allegory for the increasingly insular and divided world we live in today. The protagonist’s journey mirrors the psychological and emotional responses many of us experienced during the COVID-19 pandemic and the rise of political populism—tempted by the comfort of self-isolation yet ultimately recognizing its unsustainability.\nBy the novel’s conclusion, Murakami delivers a poignant message: escapism, no matter how alluring, is only a temporary refuge. True healing and growth require facing reality, processing emotions, and forging genuine connections with the world and the people around us. 
While The City and Its Uncertain Walls may not reach the heights of Murakami’s best works, it remains a thought-provoking exploration of solitude, memory, and the importance of human connection.\n","date":"4 February 2025","externalUrl":null,"permalink":"/posts/the-city-and-its-uncertain-walls-review/","section":"Posts","summary":"","title":"Book Review: The City and Its Uncertain Walls - Haruki Murakami","type":"posts"},{"content":"","date":"27 December 2024","externalUrl":null,"permalink":"/categories/cooking/","section":"Categories","summary":"","title":"Cooking","type":"categories"},{"content":"","date":"27 December 2024","externalUrl":null,"permalink":"/tags/cooking/","section":"Tags","summary":"","title":"Cooking","type":"tags"},{"content":"","date":"27 December 2024","externalUrl":null,"permalink":"/categories/sushi/","section":"Categories","summary":"","title":"Sushi","type":"categories"},{"content":"","date":"27 December 2024","externalUrl":null,"permalink":"/tags/sushi/","section":"Tags","summary":"","title":"Sushi","type":"tags"},{"content":"I\u0026rsquo;ve been obsessing over sushi for almost 2 years now and I\u0026rsquo;ve learned a lot on this weird rabbit-holing journey. I\u0026rsquo;ve gotten more familiar with a whole variety of fish and different cooking techniques, and gained a real appreciation of the precision and skill chefs exhibit when they handle their knives. Preparing a saku block and slicing the perfect piece for nigiri still gives me nightmares. I\u0026rsquo;ve definitely gotten faster at making rolls, but I think I\u0026rsquo;ve hit a wall where I probably need a significant block of dedicated practice to get any better. 
I\u0026rsquo;ll probably take a break on this next year and focus on other cuisines and my other hobbies.\nMay 10 - Salmon Nigiri Jul 9 - Sushi Class in Tokyo Jul 28 - Pokemon Sushi [/posts/pokemon-sushi/] Jul 28 - Tamago \u0026amp; Shrimp Aug 4 - Mosaic (Miso Salmon \u0026amp; Tamago) Aug 24 - Torched \u0026amp; Pressed (Avocado, Miso Truffle Glaze) Nov 6th - Salmon Nigiri for Dad\u0026rsquo;s Birthday Nov 28th - Handrolls Nov 30th - Mosaic, Salmon and Negitoro Dec 16th - Tamago Sushi for Holiday Potluck\n","date":"27 December 2024","externalUrl":null,"permalink":"/posts/sushi-making-progression-2024/","section":"Posts","summary":"","title":"Sushi Making Progression 2024","type":"posts"},{"content":"","date":"30 July 2024","externalUrl":null,"permalink":"/categories/food/","section":"Categories","summary":"","title":"Food","type":"categories"},{"content":"","date":"30 July 2024","externalUrl":null,"permalink":"/tags/food/","section":"Tags","summary":"","title":"Food","type":"tags"},{"content":"Tatsugiri seems to be based on sushi, specifically nigirizushi (a type of sushi shaped by hand, instead of rolled): Curly Form resembles a shrimp nigiri, Droopy Form resembles a tuna nigiri, and Stretchy Form resembles tamago sushi\n","date":"30 July 2024","externalUrl":null,"permalink":"/posts/pokemon-sushi/","section":"Posts","summary":"","title":"Pokemon Sushi","type":"posts"},{"content":"I was lucky enough to snag a reservation at Sushiya Shota (すし家 祥太) during my last week in Tokyo. Shota-san is a South Korean-born chef who previously worked at Sushi Kanesaka with famed sushi chef Koji Saito. Chef Shota served us an exquisite meal of 3 appetizers and 16 different pieces/dishes. After sampling so many different places in Japan, I felt like I could tell that Shota-san\u0026rsquo;s technique and presentation was on another level. His pieces were innovative, with my first foray into steamed sushi?! 
Other highlights for me were: the most perfect piece of Aji and an amazing cut of katsuo. The final course breakdown:\n1/ Salad of heirloom tomatoes, fruit jelly and flowers\n2/ Simmered Octopus - most tender piece of octopus I\u0026rsquo;ve ever had\n3/ Snow Hair Crab on the shell\n4/ Halibut\n5/ Isaki (伊佐木/Chicken Grunt)\n6/ Miso-marinated Sunfish (a smaller species but I didn\u0026rsquo;t catch the Japanese name)\n7/ Kinmedai (金目鯛 / Splendid alfonsino)\n8/ Blue Fin Akami - Surprisingly buttery\n9/ Maguro Collar\n10/ Kohada (コハダ/Gizzard shad)\n11/ Tiger Prawn with minced head\n12/ Squid with ink salt - best texture I\u0026rsquo;ve tried so far. More chew but still melts in your mouth\n13/ Aji (鯵/Horse Mackerel) with seaweed salt - Hard to describe, but the salt complemented and brought out the flavours of the mackerel incredibly. Highest quality mackerel I\u0026rsquo;ve ever had\n14/ Katsuo (鰹 / Bonito)\n15/ Ishigakigai (石垣貝 / Castle Stone Clam)\n16/ Steamed Sea Bass - my inner Canto was screaming with joy. So inventive!\n17/ Murasaki Uni - creamy but light\n18/ Maki Rolls - monkfish liver, eel with cucumber and kanpyo/dried gourd\n19/ Tamagoyaki\nTabelog: 3.75. Price: $185 USD with alcohol\nThis is probably my new favourite sushi omakase experience. I\u0026rsquo;ll need to brush up on my Japanese and try to come back next year!\n","date":"22 July 2024","externalUrl":null,"permalink":"/posts/sushiya-shota/","section":"Posts","summary":"","title":"Sushiya Shota (すし家 祥太)","type":"posts"},{"content":"Sushi Akazu is a high-end chain restaurant specializing in Edo-style sushi made with \u0026lsquo;Akazu\u0026rsquo; Red Vinegar. I had a chance to visit their location in Roppongi and had one of my more memorable omakase experiences. We got an early reservation and were surprisingly the only two patrons for their 5:30pm seating. 
Dinner ended up being around 22 pieces including appetizers and dessert.\nClam Broth Kinmedai/Golden Eye Snapper Sashimi - Delicate and somewhat crisp texture and mild umami flavour. Fresh wasabi was fragrant and not overpowering Hokkigai/surf clam Sashimi - Didn\u0026rsquo;t even realize it was surf clam since the texture was massively different from the frozen variants you\u0026rsquo;re used to eating in North America. Mildly sweet, and the saltiness of the soy-sauce marinade made for an amazing bite Chutoro - Great cut of tuna. The cross-hatching from the chef made it instantly melt in your mouth. This was also my first taste of the red-vinegar sushi rice which was highlighted throughout the meal Kohada/Gizzard Shad - Served sandwiched with pickled vegetables. One of my favourite fishes. Perfectly briny and slightly sweet. The vegetables heightened the flavours while adding some textural crunch Ika/Squid and Caviar - Watching the chef prepare this is always a pleasure. The squid was cut so finely it melts in your mouth into the most unctuous savoury pudding-like bite. The caviar added a bit of umami to this piece, but honestly I think it was lost in the flavour of the squid Grilled Sea Bream with Scales - Amazingly grilled fish seemed to be the theme of this night. Having grilled scales was definitely a first for me. The chip-like crunch of the scales was a textural delight, while the sea bream meat itself still remained light, fatty and flaky. The lime and salt made it feel like you were eating the best fried whitefish of your life sandwiched with some chips embedded in the meat! Ebi - biggest prawn I\u0026rsquo;ve ever had Abalone two ways: Sashimi and cooked in abalone liver sauce. Raw abalone was nothing super special, although I\u0026rsquo;m not the biggest fan of any type of abalone. The liver sauce was definitely the highlight. Never had something so rich, unctuous and full of umami. To finish the sauce we added rice to make the most amazing makeshift risotto. 
Definitely stealing this recipe for my own tasting menu\nSeaweed Salad - briny and pickle-y, a good palate cleanser\nSand Borer - first time having this type of fish, light savoury flavour and very delicate flesh Salmon - Great knifework. Soft and melted in your mouth Grilled Tairagi Handroll - Tasted like a more firm scallop. Chef\u0026rsquo;s salt and spicy seasoning enhanced the bite Horse Mackerel - pretty good, not the best mackerel I\u0026rsquo;ve had Uni \u0026amp; Chawanmushi - pretty good. Uni wasn\u0026rsquo;t the best Akami - amazing knife work and delicate work with the quick soy marinade Nodoguro Handroll - very fatty whitefish. Felt like you\u0026rsquo;re almost eating a burrito with the amount of fat that oozed out Salmon Roe, Snow Crab, Ikura - Mini-Kaisen don. Very delicate flavours. I think this could have used more soy or the ikura could have been saltier Uni - Amazing bite, bordering on excessive. The chef really heaped as much uni as he could as we approached the end of the meal Minced Toro - Keeping with the theme of excess. Giant roll of toro with minced takuan Eel - Cooked to perfection with the most amazing unagi sauce. Felt like you were eating custard Finish with Tamago and Matcha Pudding for dessert Sushi Akazu was definitely one of my most memorable sushi experiences. The sheer variety of fish is something you could only experience in Japan. I also really loved the use of red shari, giving the sushi rice a deeper aroma. I thought there was a bit of a lack of textural variety with the pieces, but it was more than compensated for by the variety of different pieces. 
This meal was one of the rare times where I felt like I ate too much.\n","date":"14 July 2024","externalUrl":null,"permalink":"/posts/sushi-akazu/","section":"Posts","summary":"","title":"Sushi Akazu (寿司赤酢)","type":"posts"},{"content":"","date":"3 June 2024","externalUrl":null,"permalink":"/categories/nolink/","section":"Categories","summary":"","title":"Nolink","type":"categories"},{"content":"","date":"3 June 2024","externalUrl":null,"permalink":"/tags/nolink/","section":"Tags","summary":"","title":"Nolink","type":"tags"},{"content":"","date":"3 June 2024","externalUrl":null,"permalink":"/categories/notitle/","section":"Categories","summary":"","title":"Notitle","type":"categories"},{"content":"","date":"3 June 2024","externalUrl":null,"permalink":"/tags/notitle/","section":"Tags","summary":"","title":"Notitle","type":"tags"},{"content":"","date":"3 June 2024","externalUrl":null,"permalink":"/posts/rice-cooker-chicken-rice/","section":"Posts","summary":"","title":"Rice Cooker Chicken \u0026 Rice","type":"posts"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/categories/algorithms/","section":"Categories","summary":"","title":"Algorithms","type":"categories"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/tags/algorithms/","section":"Tags","summary":"","title":"Algorithms","type":"tags"},{"content":"This will likely be the final course you\u0026rsquo;ll take in your OMSCS journey. It\u0026rsquo;s a prerequisite to graduate for all specializations and, at least in 2023, you were most likely unable to register for the class until your final semester (unless you were very lucky and got an early waitlist position). If you search for course reviews online, you\u0026rsquo;ll find that this course has built a reputation over the years for being difficult. 
While the material is moderately challenging, I found that it was the unique combination of the course\u0026rsquo;s logistics and grading mechanics, along with it being a graduation requirement, that adds a level of stress and anxiety that doesn\u0026rsquo;t exist for the other courses.\nFor my semester the course was broken down into the following deliverables:\nHomework: 14%. Coding projects: 7% Mini-Quizzes: 7%. Logistic quizzes: 3% Exams: 69% (3 total) Based on the breakdown, you can see that the homework, projects and quizzes are mostly there as a preparation tool and forcing function to study for 3 separate exams/midterms, which make up the bulk of your grade. During my semester the breakdown was:\nExam 1: Dynamic Programming, Divide \u0026amp; Conquer, FFTs Exam 2: Graph Theory: Strongly Connected Components, Minimum Spanning Trees, Max Flow. Modular Math \u0026amp; RSA Exam 3: NP Completeness, Linear Programming, Halting Problem Each exam usually consisted of 2 short answer problems and a section of multiple choice questions. What most students (myself included) struggled with was the particular requirements for answering each short answer question. Due to the particular nature of this course, the instructors require your short answers to follow a specific format, which is provided for you in your homework and quizzes and regularly reviewed at Office Hours and on Ed posts. The formatting requirements ensure that we\u0026rsquo;re demonstrating a deep understanding of the material and that we can be evaluated fairly in an online setting. But in my opinion the strange and sometimes obtuse requirements also relax the level of programmatic and mathematical rigour compared to equivalent graduate-level algorithms courses that I\u0026rsquo;ve taken in the past. Seeing other iterations of this course from other universities like MIT, Berkeley and UofT (we used the same textbook as my undergrad course ECE-358H1!) 
I think the course could benefit from actually increasing the level of difficulty of the questions while increasing the duration of the exams or making the exams open book.\nCurriculum:\nDynamic Programming - The bane of any technical programming interview. DP is an optimization technique for breaking down problems into smaller subproblems and efficiently storing solutions to avoid redundant computations. We cover the typical problems like longest common subsequence, knapsack, shortest path etc\u0026hellip; Dynamic Programming was also the only type of problem where we could solve the problem with pseudocode. Divide \u0026amp; Conquer Algorithms - Break your problem into smaller, more manageable subproblems, solving them recursively, and combining their solutions to derive the final result. We review classic algorithms like mergesort and quicksort, then cover advanced techniques such as Strassen\u0026rsquo;s algorithm for matrix multiplication and median of medians Fast Fourier Transform - We review the basic components of how the FFT works as a divide and conquer algorithm. This section was a highlight for me personally as it gave me a deeper appreciation for the algorithm itself and its use for fast multiplications Graph Theory - Strongly Connected Components - Students delve into algorithms for identifying strongly connected components within directed graphs, gaining insights into their applications in network analysis, social network modeling, and compiler optimization. Graph Theory - Minimum Spanning Trees - MSTs represent a cornerstone of graph theory, offering elegant solutions to connectivity problems in network optimization. Through algorithms like Kruskal\u0026rsquo;s and Prim\u0026rsquo;s, students learn to construct minimum spanning trees efficiently, unraveling their applications in network design, clustering analysis, and routing algorithms. 
Graph Theory - Maximum Flow - Maximum flow algorithms play a pivotal role in modeling and optimizing flow networks. Students explore algorithms like Ford-Fulkerson and Edmonds-Karp and derive the max-flow/min-cut theorem. Modular Math & RSA Encryption - This was a particularly interesting section for me since it dealt with number theory and primes. We dove into primality testing and Euler's theorem and worked through the RSA encryption algorithm itself, gaining a deep understanding of its role in cryptography. NP Completeness and Reductions - The concept of NP completeness serves as a cornerstone of computational complexity theory, offering insights into the inherent difficulty of solving certain decision problems. Through reductions and complexity analysis, students explore the intricacies of NP-complete problems, gaining a profound appreciation for the limits of efficient algorithmic solutions. We review the basic Karp NP-complete problems like SAT, 3SAT, Clique, Vertex Cover, Knapsack, etc. Linear Programming - Linear programming provides a powerful framework for optimizing objective functions subject to linear constraints, with applications spanning operations research, economics, and engineering. Students learn the simplex method and interior point methods, mastering techniques for solving linear programming problems and unlocking their potential in resource allocation, production planning, and decision-making. I'm really glad that I was able to take this course and it served as a fitting end to my OMSCS journey. Algorithms serve as the core foundation of computing and every domain ranging from machine learning to compiler design.
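To make the dynamic programming idea from the curriculum concrete, here is a small sketch (my own illustration, not a course solution) of the classic longest common subsequence table: each cell stores the answer to one subproblem, so nothing is ever recomputed.

```python
def lcs(a: str, b: str) -> int:
    """Length of the longest common subsequence of a and b,
    built bottom-up so each subproblem is solved exactly once."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # characters match: extend the best answer for both shorter prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop a character from one string or the other
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```

Each cell depends only on already-filled cells, which is exactly the "store subproblem solutions to avoid redundant computation" pattern the section describes.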
By appreciating the role of algorithms and complexity theory, we gain insight into the fundamental mathematical realities that power and limit the applications we rely on daily, allowing us to build even more powerful tools for everyone.\n","date":"4 May 2024","externalUrl":null,"permalink":"/posts/cs-650-graduate-algorithms/","section":"Posts","summary":"","title":"CS-6515 Graduate Algorithms","type":"posts"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/categories/dynamic-programming/","section":"Categories","summary":"","title":"Dynamic Programming","type":"categories"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/tags/dynamic-programming/","section":"Tags","summary":"","title":"Dynamic Programming","type":"tags"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/categories/omscs/","section":"Categories","summary":"","title":"Omscs","type":"categories"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/tags/omscs/","section":"Tags","summary":"","title":"Omscs","type":"tags"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/categories/recursion/","section":"Categories","summary":"","title":"Recursion","type":"categories"},{"content":"","date":"4 May 2024","externalUrl":null,"permalink":"/tags/recursion/","section":"Tags","summary":"","title":"Recursion","type":"tags"},{"content":"Welcome fellow food enthusiasts! Today, I'm thrilled to recount my extraordinary dining experience at Yuji, a hidden gem nestled in the heart of Japantown that specializes in Kappo Ryouri - the Japanese art of cutting and cooking, with a focus on seasonal ingredients.\nAnkimo tofu, Eggplant with Uni, Simmered Bamboo Shoots and Wakame, Firefly Squid with Mustard Miso Vinegar, Omelette with Caviar Our evening commenced with a tantalizing array of appetizers, each one a masterpiece in its own right.
The Ankimo tofu, delicately crafted from monkfish liver, melted on the palate with its creamy texture, while the Eggplant with Uni offered a delightful fusion of flavors, enhanced by the richness of sea urchin. Simmered Bamboo Shoots and Wakame provided a refreshing contrast, perfectly complemented by the tangy Firefly Squid with Mustard Miso Vinegar. The Omelette with Caviar, a symphony of indulgence, left us yearning for more.\nDobin Mushi Next up was the Dobin Mushi, a traditional Japanese broth served in a teapot. Infused with fragrant dashi and adorned with tender morsels of seafood and mushrooms, each sip transported us to culinary bliss.\nSashimi - Otoro, Kinmedai, Squid The Sashimi dish showcased the freshest catch of the day, featuring luscious slices of Otoro, Kinmedai, and Squid. Each bite was a revelation, highlighting the purity of the ingredients and the skill of the chef.\nGrilled Chilean Sea Bass w/ Yuzu Kosho Miso Sauce For our main courses, we savored the Grilled Chilean Sea Bass with Yuzu Kosho Miso Sauce, a harmonious blend of smoky flavors and citrusy notes.\nMadai Shabu Shabu The Madai Shabu Shabu allowed us to cook tender slices of sea bream at our table with the cutest candlelit broth. Definitely the most unique dish of the night.\nChawanmushi with Ikura The Chawanmushi with Ikura was a delicate custard infused with the essence of dashi, topped with plump salmon roe that burst with briny goodness.\nA5 Wagyu Steak! And who could forget the pièce de résistance – the A5 Wagyu Steak! Each bite of this marbled masterpiece melted in the mouth, leaving a lingering sensation of unparalleled luxury.\nDeep Fried Prawns Stuffed with Gingko Cake & Okra Crab Porridge with Truffle! The Deep Fried Prawns stuffed with Gingko Cake offered a delightful contrast of textures, with the crispy exterior giving way to a succulent filling.
And the Crab Porridge with Truffle elevated comfort food to new heights, with the earthy aroma of truffle permeating every spoonful.\nMatcha Creme Brulee To conclude our epicurean journey, we indulged in the Matcha Crème Brûlée, a sublime marriage of creamy custard and bitter-sweet matcha, perfectly caramelized to create a crispy topping.\n","date":"30 March 2024","externalUrl":null,"permalink":"/posts/yuji/","section":"Posts","summary":"","title":"YUJI","type":"posts"},{"content":"Valentine's Day\n","date":"15 February 2024","externalUrl":null,"permalink":"/posts/surf-turf-with-fondant-potatoes/","section":"Posts","summary":"","title":"Surf \u0026 Turf with Fondant Potatoes","type":"posts"},{"content":"CS-7650 is the newest machine learning OMSCS course that delves into the intricacies of Natural Language Processing, offering a comprehensive exploration of both foundational concepts and contemporary techniques, and a history of how we arrived at Large Language Models.\nI was lucky enough to enroll in its 2nd iteration for Fall 2023. At the time of writing, the course consists of 6 coding assignments, 6 quizzes, a final project and two exams. The assignments were pretty standard fare and should be pretty straightforward if you're well-versed in PyTorch. I particularly enjoyed the exam structure, which was open book and gave us almost a week to complete our writeups before submitting. I much prefer this structure, which better tests your knowledge than the more traditional closed-book, time-limited exam formats that value rote memorization, anxiety management and reading comprehension more than anything else.\nCurriculum:\nFoundational Concepts: CS-7650 begins with a solid foundation in neural network basics, ensuring students are well-equipped with the fundamental knowledge required for more advanced topics. Concepts such as tokenization, part-of-speech tagging, and syntactic analysis are covered comprehensively.
Text Classification: Moving into the realm of natural language processing, the course transitions to text classification, exploring techniques for classifying text using both traditional regression approaches and basic neural network models. Recurrent Neural Networks (RNNs): RNNs, a crucial component in NLP, are extensively covered. The course delves into the architecture of RNNs, LSTMs and Seq2Seq models, their ability to handle sequential data, and applications like language modeling. Distributional Semantics: The course explores distributional semantics, focusing on representing the meaning of words based on context. Topics include word embeddings, semantic similarity, and methods like Word2Vec and GloVe. Transformers: The revolutionary transformers take center stage in this section, with an in-depth exploration of attention mechanisms, transformer architecture, and their applications in tasks like sequence-to-sequence models and language understanding. Machine Translation: The classic problem of machine translation is addressed in the context of modern techniques and models. Approaches to machine translation, neural machine translation, and the application of attention mechanisms are covered. Current State-of-the-Art NLP Techniques (Meta AI): One of the highlights of CS-7650 is the expertise brought to the table by leading researchers in the field today. Several Facebook researchers at FAIR presented their current research in Question Answering, Text Summarization, Privacy Preservation and Responsible AI. Key-Value Memory Networks # My biggest learning from this course was really appreciating how the current state of the art in LLMs with transformers came from an iterative evolution of neural architectures within the NLP research community. For the final project, we were challenged to design and optimize a key-value based memory network.
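The core read operation of such a memory network can be sketched as soft attention over stored key-value pairs. Below is my own toy pure-Python illustration (the dot-product scoring, dimensions and names are my assumptions, not the course's actual implementation): score each key against a query, softmax the scores, and return the weighted sum of the values, i.e. a differentiable analogue of a hash-table lookup.

```python
import math

def memory_read(query, keys, values):
    """Soft read from a key-value memory.

    Scores each key against the query (dot product), softmaxes the
    scores, and returns the weighted sum of the values. Because every
    step is smooth, gradients can flow through the 'lookup'.
    """
    # Dot-product similarity between the query and each stored key.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Numerically stable softmax over the scores.
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Weighted sum of the stored values.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# With one key matching the query far better than the rest,
# the read is dominated by that key's value.
keys = [[10.0, 0.0], [0.0, 10.0]]
values = [[1.0, 2.0], [3.0, 4.0]]
print(memory_read([1.0, 0.0], keys, values))  # close to [1.0, 2.0]
```

Swap the softmax-weighted lookup in for every query position and you essentially have the attention mechanism that later architectures build on.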
It was both interesting to see how we could implement such a basic concept from computer science within a differentiable neural network, and insightful to see how this concept of memory would eventually evolve into attention mechanisms in later state-of-the-art architectures. This was one of the more challenging assignments I had to complete within OMSCS, ranking up there with training MARL agents in Reinforcement Learning.\nConclusion\nCS-7650 has been one of the better courses I've taken in OMSCS. The lectures were done very well and the subject matter was very relevant given the recent surge in popularity around LLMs and NLP. I did kind of wish that the course covered more recent advances in the field like RLHF, but it was still a great foundational introduction to the field.\n","date":"13 February 2024","externalUrl":null,"permalink":"/posts/cs-7650-natural-language-processing/","section":"Posts","summary":"","title":"CS-7650 Natural Language Processing","type":"posts"},{"content":" I tried using dots to create a pointillist effect, evoking a feeling of restlessness and a sense of being unmoored. The contrast between the vibrant blue circle and the dark center represents my struggle between happiness and sadness.\n","date":"9 January 2024","externalUrl":null,"permalink":"/posts/feeling-blue/","section":"Posts","summary":"","title":"feeling blue","type":"posts"},{"content":"","date":"1 January 2024","externalUrl":null,"permalink":"/categories/michelin/","section":"Categories","summary":"","title":"Michelin","type":"categories"},{"content":"","date":"1 January 2024","externalUrl":null,"permalink":"/tags/michelin/","section":"Tags","summary":"","title":"Michelin","type":"tags"},{"content":" January - Empress By Boon # Nestled within Empress by Boon lies a culinary treasure: Uni & Sardine Fried Rice, a standout amidst a prix fixe menu that oscillates between hits and misses.
The rice was skillfully infused with the briny essence of the sardines, the creaminess of the uni, and an impressive sense of 鑊氣 that evokes the essence of authentic Cantonese cuisine.\nMoreover, the ambiance at Empress by Boon is nothing short of captivating. The restaurant boasts breathtaking views of Chinatown that served as a picturesque backdrop to our Chinese New Year meal. One also can't help but be enchanted by the warm welcome and attentive service.\nFebruary - 金蓬萊 Golden Formosa # 金蓬萊 (Golden Formosa) is a 1-Michelin-star restaurant tucked away in a quiet neighbourhood in the Shilin District. The restaurant effortlessly marries tradition with innovation, showcasing dishes that are a symphony of flavors and textures.\nThe 蓬萊排骨酥 (Crispy Pork Ribs) set the tone for our extravagant lunch, presenting a harmonious blend of crispy exterior and tender, succulent meat. Each bite was a burst of savory delight, perfectly balanced with aromatic spices that lingered on the palate.\nThe 乾拌古早味蚵仔麵線 (Oyster Noodles), a lesser-known delight on the menu, surprised and delighted with its simplicity yet depth of flavor. The noodles, perfectly cooked, were bathed in a savory sauce that carried the essence of fresh, plump oysters. Each slurp was a celebration of the sea, as the delicate yet distinct flavor of the oysters intertwined flawlessly with the umami-rich sauce.\nThe 烏魚子炒飯 (Prime Mullet Roe Fried Rice) was a revelation—a testament to the chef's artistry. The rice, delicately infused with the essence of prime mullet roe, offered a medley of umami flavors that danced on the taste buds. The subtle yet distinct seafood notes elevated the dish to a level of unparalleled indulgence.\nHowever, the pièce de résistance was the 佛跳牆 (Buddha Jumps Over the Wall)—a culinary masterpiece that surpassed all expectations. This traditional Taiwanese delicacy was a complex symphony of premium ingredients, each contributing its unique essence.
I'm not usually a fan of taro and traditional Chinese dried seafood, but the rich, flavorful broth enveloped a luxurious assortment of seafood and meats, creating a range of flavors and textures that was nothing short of extraordinary. I've never had soup and taro that was so luxurious and velvety!\nThe attentive service and inviting ambiance further enhanced the overall dining experience, creating a memorable afternoon that celebrated the diverse and exquisite flavors of Taiwanese gastronomy.\nMarch - House of Prime Rib # House of Prime Rib sets the standard for quality dining in San Francisco. The moment you walk in, you're greeted by a festive energy that exudes warmth and tradition.\nThe portions are massive, and the prime rib is always cooked to a perfect medium rare. Juicy, tender, and precisely prepared, it's the kind of dish that beckons for seconds, even if you're struggling not to overeat. The quality is consistent, making each visit just as exceptional as the last.\nDon't underestimate the sides here—they're as remarkable as the main attraction. And here's a tip: don't sleep on the creamed spinach! These seemingly humble sides are always a delightful surprise, adding layers of flavor that complement the star dish beautifully.\nHouse of Prime Rib isn't just a meal; it's an experience. It's the kind of place where you'll find yourself eagerly planning your next visit while still savoring the remnants of your current feast. If you're in San Francisco, this spot is an absolute must-try for a classic, indulgent dining experience.\nApril - Rintaro # Rintaro is probably one of my favourite restaurants in SF.
We came by on a random weekday night and opted for their set menu, but I'd highly recommend going a la carte and ordering to your heart's desire.\nWe started with the Kani Dashimaki Tamago, a fluffy, light folded omelette blended with local San Francisco Dungeness crab that was a perfect starter and really highlighted Rintaro's vision of Northern Californian and Japanese cuisine.\nNext we had the San Ten Mori, several pieces of sashimi showcasing high-quality ingredients. The San Diego Bluefin tuna was a highlight, but we didn't think it was anything particularly mindblowing or adventurous.\nThe Tsukune (minced chicken skewers) at Rintaro emerged as a true highlight, capturing the essence of perfectly grilled skewers. Each bite was a testament to the chef's mastery, offering a symphony of flavors that tantalized the taste buds.\nThe Chizu Tori Katsu, a fried delight, impressed with its impeccable preparation. Notably light and devoid of excess oil, it retains a crispy texture while allowing the chicken to shine. The accompanying katsu sauce added an extra layer of savory magic.\nEnding on a unique note, the Hojicha Panna Cotta might not have been a personal favorite, but its distinctiveness cannot be overlooked. Its unique toasted tea notes in the syrup added a touch of adventure to the whole dining experience and was a great end to a wonderful meal.\nMay - 景成 - City View Restaurant # Regarded by many as the pinnacle of dim sum in San Francisco, City View's classics set the standard for what good Cantonese food should taste like. From the impeccably crafted 蝦餃 (Shrimp Dumplings) to the flavorful 糯米雞 (Stuffed Sticky Rice in Bamboo Leaves), each dish carries the hallmark of expert craftsmanship and attention to detail.\nHowever, what truly steals the show is their XO醬炒腸粉 (Fried rice rolls with XO sauce).
A rarity to find executed at such a high standard, this dish is a testament to City View's dedication to authenticity and innovation. The 腸粉 was expertly wok-fried, achieving a textural marvel—bouncy and QQ, with a delightful crunch on the exterior. The marriage of flavors between the XO sauce and the delicate rice rolls was a symphony of tastes that's hard to forget.\nJune - Noodle in a Haystack # Noodle in a Haystack is the hardest reservation to get in the city, for good reason. You can feel Clint and Yoko's dedication to their craft through the attention to the smallest of details in each one of their dishes. Seating is very intimate, with an L-shaped bar surrounding the prep area. We opted for the sake pairing, which came with 8 different dishes.\nOur soirée began with a Financier adorned with Caviar. The gentle sweetness of smoked shoyu harmonized with an exquisite touch reminiscent of a sophisticated lox bagel, a whimsical yet refined appetizer.\nNext, the Chawanmushi unveiled itself with an audacious twist. Chicken intertwined with the nuanced depths of dashi-infused egg and seaweed. The XO sauce played mischievously, adding textures and layers that challenged the norms of this classic dish.\nEnter the Cold Tomato and Uni Ramen—a delicate dance of flavors. The sundried tomatoes lent a surprising depth to the broth, while the velvety richness of uni bestowed an opulence that resonated with each spoonful, crafting a symphony of sensation.\nBluefin Tuna and Arugula Salad - meticulously selected, it was a testament to the restaurant's uncompromising commitment to quality.\nThe A5 Wagyu Beef and Curry arrived, accompanied by ethereal fried milk bread—each bite a sublime exploration. The beef melted like poetry, while the curry caressed the senses, culminating in a crescendo of flavor and tenderness.
The dish was extremely playful and hit a nostalgic note for me, reminding me of the best parts of Japanese comfort food.\nThe Yuzu Daikon Pickles offered a palate-cleansing interlude—a clean, citrusy burst that revitalized the senses, leaving a trail of zesty elegance. However, it was the humble cucumbers that stole the spotlight—a seemingly unassuming creation transformed into a mesmerizing delicacy. The balance of salt, sugar, and shio konbu created a harmonious dance on the palate, leaving an enduring impression.\nLastly, the Shio Butter, Corn, Whelk and Clam Ramen—an opus of depth and complexity that rewrote the boundaries of noodle artistry. As the konbu butter melts into the clam broth, the ramen transforms into the most deeply flavourful seafood broth. The whelk and corn provide a great textural contrast to the amazingly toothsome noodles and chashu. This was quite possibly the best single bowl of ramen I've ever had the privilege to try.\nDessert was a combination of shaved yuzu ice and burnt Basque cheesecake. I'm not much of a dessert person, but both were a satisfying way to end an exquisite meal.\nNoodle in a Haystack transcends a mere dining experience—it's an immersive tapestry of flavors, textures, and narratives. Each dish is a chapter in a story, orchestrated by a chef's genius and enriched by hosts who transform a meal into an unforgettable saga. A reservation here isn't just access; it's an entrée into the extraordinary.\nJuly - Llama San # Llama San isn't just a restaurant; it's a collision of Japanese and Peruvian cuisine that beckons the palate on an exhilarating journey. I was able to snag a seat in July at the bar for a quick dinner. They offer a prix fixe menu but I opted to order a la carte.\nMarasheen Oysters, corn cream, grilled baby corn and papa sec.
The grilled corn added a textural complexity to the dish and complemented the brininess of the oysters perfectly.\nThe Mackerel Ceviche is like a canvas painted with Peruvian zest—a vibrant melody of freshness and tanginess that sparks an instant connection with your taste buds.\nIberico Pork Tonkatsu, Udon Verde & Tsukemono Cucumber - This dish was the undisputed star of the night. This Katsu, an epitome of culinary brilliance, seduces with tenderness and an explosion of flavors. The Udon Verde was a creamy, flavor-packed symphony with a nuanced peppery kick. It's like a fusion of the familiar and the unexpected, an intriguing dance of taste and texture that marries beautifully with the standout, amazingly fried Iberico pork.\nLlama San isn't just about food; it's a celebration of innovative fusion that bridges continents. It's where Peruvian vibrancy meets Japanese finesse, inviting your palate on an uncharted voyage through a world of extraordinary flavors.\nAugust - The Anchovy Bar # If you're going to The Anchovy Bar, you definitely have to try their most popular dish - the Anchovy toast. The anchovies, carefully arranged atop the bread, unveil a tapestry of briny richness that dances across the palate. Each bite, a delicate interplay of umami, harmonizes with a subtle olive oil drizzle, adding depth without overwhelming the senses.\nThe Anchovy Bar proves that sourcing the highest quality local ingredients and orchestrating their preparation with precision and heart can create a culinary composition that delights the senses and elevates a common dish to extraordinary heights.\nSeptember - Sparrow and Wolf # A hidden gem of Vegas, skip the fancy casino buffet and celebrity chef spots and come here instead! Their prix fixe menu is typically 8 different dishes that change regularly based on seasonality.
Some highlights from what we tried:\nOxtail hummus—a revelation that redefines traditional hummus. The richness of stewed oxtail harmonized with the creamy chickpea base, elevating it to an indulgent, savory delight that leaves an unforgettable impression.\nFoie Gras Chashu Banh Mi—a playful twist on a classic. The opulence of foie gras meets the succulent chashu in a fusion of textures and flavors that dance gracefully on the taste buds, delivering an indulgent and innovative experience.\nOctopus Confit—an epitome of culinary finesse. Tender and succulent, it embodies meticulous preparation and artistry, offering a delicate balance of flavors and a kick of spice that delightfully surprised with each bite.\nOctober - Angler # I first heard of this place from David Chang's podcast a couple years ago. Angler is a sea-life focused Michelin-starred restaurant from the Saison group. The embered oysters and Parker House rolls were a deadly delicious combo that will knock your socks off. The uni and trout roe rice was so buttery and briny in the best possible way and was the surprise highlight of our night. The grilled sea bream was a bit dry for our taste but was made up for by an amazing vermouth butter sauce. I wouldn't recommend the grilled hen of the woods mushroom personally. It was cooked well but the sauce was too reminiscent of a Frank's RedHot sauce. Overall, an amazing experience with impeccable service. Seeing the chefs cook in the open concept kitchen was also a total delight!\nNovember - Kokkari Estiatorio # An SF institution that lives up to the hype. This classic Greek restaurant is named after a small fishing village on the island of Samos and is the sister restaurant of the acclaimed Evvia Estiatorio in Palo Alto.
You'll be greeted by a cozy cabin-like interior adorned with a welcoming fireplace and extensive woodwork, making you feel right at home.\nLamb and fresh seafood were a definite must-order when you're here. Their lamb shanks were cooked to perfection. Simple, light but packed with flavor. The sea bass was also delightful, offered grilled or steamed. We found the grilled skin was a bit too charred, making it slightly overwhelming given the fish's more delicate flavor. The surprise of the night was the homemade grilled pita with Melitzanosalata, Favasalata and Tirokafteri. The pita had an amazingly crispy exterior but was still fluffy and light. This was the first time I've tried favasalata, which was amazingly light but still packed a punch in terms of flavor. This meal exceeded all expectations and I'm already looking forward to coming again.\nDecember - ILCHA # The soy-marinated shrimp at ILCHA is a rare culinary gem in SF. Imagine succulent shrimp, delicately marinated in a luscious soy marinade that creates a perfect balance of salty and savory notes. Each bite encapsulates a harmonious blend of umami-rich soy, gently infusing the shrimp with layers of depth and a hint of sweetness. What sets ILCHA's soy-marinated shrimp apart is the meticulousness of the marinade, which not only enhances the natural sweetness of the shrimp but also imparts a tantalizing complexity that elevates the dish to an unforgettable dining experience. And don't forget the rice!
The perfectly cooked Koshihikari rice and egg provide a perfect backdrop for all the fatty goodness of the shrimp heads.\n","date":"1 January 2024","externalUrl":null,"permalink":"/posts/favourite-restraunts-of-2023/","section":"Posts","summary":"","title":"My Favourite Restaurants of 2023","type":"posts"},{"content":"'Festive Trip' is an invitation to embrace the magic of the moment, to revel in the joyous experience of exploration and discovery. I hoped to capture the essence of joyful ski adventures through an endless spiral of vibrant, alternating trees, reminiscent of a kaleidoscope of colors.\n","date":"26 December 2023","externalUrl":null,"permalink":"/posts/festive-trip/","section":"Posts","summary":"","title":"Festive Trip","type":"posts"},{"content":"","date":"26 December 2023","externalUrl":null,"permalink":"/categories/fractals/","section":"Categories","summary":"","title":"Fractals","type":"categories"},{"content":"","date":"26 December 2023","externalUrl":null,"permalink":"/tags/fractals/","section":"Tags","summary":"","title":"Fractals","type":"tags"},{"content":"I set a goal for myself to learn how to make sushi over 2023. Let's see how I progressed!\nApril 22 - Tamago Sushi #\nApril 29 - Katsuo, Ikura and Salmon Nigiri #\nMay 7 - Negitoro, Ebi, Salmon Nigiri & Unagi Handrolls #\nMay 13 - Sushi Rolling 101 with Chef Mark Gyotoku.
Hosomaki and California Rolls #\nMay 20 - Shime Saba and Salmon Hakozushi #\nMay 28 - Hotate Scallop Nigiri & Salmon, Takuan Maki #\nJune 10 - Salmon Aburi #\nJune 18 - Salmon (Aburi or Regular) and Ikura #\nJuly 2 - Shrimp and Smoked Salmon Hakozushi #\nJuly 7 - Chutoro Nigiri and Salmon Hakozushi #\nAug 8 - Unagi Nigiri #\nAug 13 - Mosaic Sushi Attempt 1 #\nAug 19 - Flower Sushi and Mosaic Attempt 2 #\nAug 26 - Triple Tomoe Sushi, Mosaic Sushi Attempt 3 and Tamago Nigiri #\nSept 1 - Salmon, Ikura and Mosaic Attempt 4 #\nOct 7th - Salmon and Mosaic Attempt 5 #\nNov 5th - Abstract Flower Sushi #\nNov 26 - Picnic Gimbap #\nDec 2 - Salmon #\nDec 3 - Negitoro and Cucumber Maki #\nDec 7 - Sushi Demo at Work #\nDec 9 - Mosaic Attempt 6 #\nDec 23 - Homemade Sushi Platter - Green Dragon
Roll, Black Dragon Roll, Mosaic Sushi, Spam Musubi, Unagi Nigiri and Maki #\n","date":"25 December 2023","externalUrl":null,"permalink":"/posts/sushi-making-progression-2023/","section":"Posts","summary":"","title":"Sushi Making Progression 2023","type":"posts"},{"content":"'Pink Dream' invites viewers to immerse themselves in the intangible, to explore the connections that weave through our existence. Color palette inspired by Annie.\n","date":"20 December 2023","externalUrl":null,"permalink":"/posts/pink-dream/","section":"Posts","summary":"","title":"pink dream","type":"posts"},{"content":"Instructor(s): Zsolt Kira. Course Page: Link\nCS-7643 is a foundational course for the OMSCS Machine Learning specialization. It largely follows the curriculum of similar Deep Learning courses like CS231n from Stanford, going through the evolution of deep learning through a Computer Vision lens. We cover Image Classification, Perceptrons, Backpropagation, Convolutional Neural Networks, Recurrent Neural Networks, Deep Reinforcement Learning and finally Attention and Transformers.\nThe most interesting part of the course was the special topics covered by researchers from Meta AI. We also had the chance to meet them virtually during office hours and ask questions regarding their research. I was able to meet some of the collaborators on the No Language Left Behind translation model that can translate between over 200 different languages!\nI took the summer variation of the course, which has one less assignment than the regular four during the Fall/Spring semesters. The course still culminated in a final open-ended group project. Each assignment consisted of basic proofs or exercises from the past week's lecture topics, a paper review, and a coding assignment with a substantial experimental and analysis component.
I'll briefly cover only the most interesting parts of the papers I reviewed, as assignment contents are still confidential.\n1) Weight Agnostic Neural Networks # The paper demonstrates a novel network search algorithm that can solve a given machine learning problem without any explicit weight training. They demonstrate that this method is able to find minimal architectures that can solve reinforcement learning tasks like 2D bipedal walking and driving. They also demonstrate it can find architectures to solve supervised learning problems like MNIST digit classification. The results of this research seem deeply connected to learning and evolution. It appears to signal that our brains may not actually be giant general-purpose learning machines, and that the neural architecture of our brains may bias us towards specific ways of learning. In my mind, this could even indicate different modes of thought or reasoning that are beyond our human comprehension due to limitations of our existing brain structure and architecture.\nWe've also seen this in the history of deep learning innovations, where novel architectures seem to be the triggering point for large improvements in performance (RNNs, LSTMs, CNNs, Transformers, Diffusion Networks, etc.). We are also constrained by the limitations of our search algorithms, as our best method remains gradient descent optimization. New search algorithms could potentially unlock further innovations in deep learning.\n2) Taskonomy: Disentangling Task Transfer Learning # In this paper, Zamir et al. explore the structure and relationships between different visual learning tasks via transfer learning.
They use a fully computational approach to model the relationships between twenty-six different semantic tasks such as surface normal estimation, 2D segmentation, edge detection, etc\u0026hellip; The authors also demonstrate that taxonomy transfer generalises to novel tasks that are not in their trained task dictionary, and they train the taxonomy on other datasets to show that what they found is generalizable. This leads the reader to conclude that there is an inherent structure in visual tasks that is being learned by the neural networks, and that this structure can be used to model redundancies across different tasks and be reused via transfer learning.\nThis study seems to indicate that deep neural networks are capable of learning high-level features or concepts that roughly map to our own perceived relationships or actual physical relationships between different visual tasks (surface normals to depth maps via derivatives). To decide where to transfer from for a new learning task, we may want to leverage our own prior knowledge to find networks that were trained on conceptually similar tasks.\n3) Do Vision Transformers See Like Convolutional Neural Networks? # This paper identifies key structural differences in the features learned by ResNet-based CNNs versus Vision Transformers (ViTs). They identify that ViTs are better at incorporating global information than ResNets at lower layers. The paper identifies key parts of the transformer architecture that lead to such performance, such as the importance of information flow through skip connections and how global average pooling vs the CLS token helps maintain spatial localization.\nMy personal takeaway is that the historical motivations for convolutional neural networks and finding operations that mimic our ‘common sense’ understanding of existing vision systems are likely incorrect.
The paper suggests that the power of attention and transformers in representing global features is more important and can lead to potentially better performance. Similar to my learnings from Paper 1, network architecture is probably the largest contributing factor to a model\u0026rsquo;s representational power. Future research should focus on different architectures that can further improve model performance.\nFinal Project - AlphaZero \u0026amp; Connect 4 # For my final project, I worked with another classmate to study an implementation of AlphaZero that could play Connect 4. The most fascinating part of this model is that it leverages Deep Reinforcement Learning and self-play to learn how to play the game. This means that all the concepts it learned were self-taught, without any human input or bias!\nAlphaZero architecture and MCTS Search We did an ablation study to understand how much its architecture and reliance on Monte-Carlo Tree Search affected performance. We also explored using linear probes to see if we could tease out whether it was learning any specific game concepts from training. I hope to find some free time in the coming months to continue in that vein of research.\n","date":"30 October 2023","externalUrl":null,"permalink":"/posts/cs-7643-deep-learning/","section":"Posts","summary":"","title":"CS-7643 Deep Learning","type":"posts"},{"content":"","date":"30 October 2023","externalUrl":null,"permalink":"/categories/machine-learning/","section":"Categories","summary":"","title":"Machine Learning","type":"categories"},{"content":"","date":"30 October 2023","externalUrl":null,"permalink":"/tags/machine-learning/","section":"Tags","summary":"","title":"Machine Learning","type":"tags"},{"content":"Back in December, Hugging Face released an eight-unit course [https://huggingface.co/deep-rl-course/unit1/introduction] covering the fundamentals of Deep Reinforcement Learning.
The course covers fundamental theories of Deep RL and core libraries, and gives you hands-on experience training your own agents in unique environments ranging from classical control problems all the way to video games like Space Invaders and even Doom!\nAs opposed to a more classical graduate course like OMSCS\u0026rsquo;s CS-7642 [https://ben-yu.com/cs-7642-reinforcement-learning/], this course puts a larger emphasis on the major advancements that deep learning techniques have introduced to the field in the past couple of years. The course covers the following topics:\nQ-Learning, Deep Q-Learning and MC vs TD Learning Policy Gradient with REINFORCE Actor-Critic Methods Multi-Agent Reinforcement Learning Proximal Policy Optimization I\u0026rsquo;ll try to highlight the portions of the course that I found the most interesting or particularly unique to this course.\nUnits 1-3: Q-Learning, Deep Q-Learning and MC vs TD Learning The course first formulates the reinforcement learning problem and the basic paradigms of solving RL problems. We first focus on two paradigms within the model-free RL algorithms: Policy-based methods vs Value-based methods.\nTaxonomy of RL Algorithms (OpenAI - Spinning Up [https://spinningup.openai.com/en/latest/spinningup/rl_intro2.html]) We start off with Value-Based methods, where we want to learn a value function that maps a state to its expected value. The course reviews the Bellman Equation [https://huggingface.co/deep-rl-course/unit2/bellman-equation], which defines how one can recursively calculate the value of any given state. We then briefly look at two major learning paradigms for training value-based methods: Monte Carlo Learning, where you update your value function based on an entire episode of data, and Temporal Difference (TD) Learning, where we update our value every n steps.\nThe course then builds on this to introduce Q-Learning, which is an off-policy value-based method with TD(0) learning.
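The tabular rule at the heart of Q-Learning fits in a few lines. A toy sketch (hyperparameter values are illustrative, and this is not the course's starter code):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One off-policy TD(0) update: move Q[s, a] toward the bootstrapped target."""
    td_target = r + gamma * np.max(Q[s_next])  # greedy bootstrap makes it off-policy
    td_error = td_target - Q[s, a]
    Q[s, a] += alpha * td_error
    return Q
```

Exploration is handled separately (typically epsilon-greedy action selection); the max over next-state values in the target is what makes the method off-policy.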
We implement Q-learning from scratch and solve some basic OpenAI gym problems like Frozen Lake and Taxi Driving.\nOne major limitation of Q-learning is that it\u0026rsquo;s a tabular method which stores a value for each state-action pair, which is very memory intensive for problems with high state and action space dimensionality. The key innovation with the advent of deep learning is that we can now approximate the Q-table with a deep neural network.\nDQN adds several tricks to enable better generalization: Memory Replay and a Value/Target Network Similar to my work in Georgia Tech\u0026rsquo;s CS-7642 course, we implement our own DQN network based on Mnih et al. [https://www.nature.com/articles/nature14236] Their paper introduces the concept of Experience Replay, which acts as a buffer to store previous experiences and lets the network train on a larger range of samples, rather than the sequential experiences it gets during a normal training episode. Mnih et al. also add the concept of a target network, which helps stabilise training. With a single network we are shifting both the Q-values and the TD target with each update. By having a separate network, we can model the TD target separately and avoid oscillations during training.\nMy Deep Q-Network Agent playing Space Invaders Understanding these concepts, we train an agent leveraging stable-baselines3 to play Space Invaders!\nUnit 4: Policy Gradient with REINFORCE A second approach to reinforcement learning is to try to learn the policy function itself rather than approximating it through a value function. To do this we parameterize the policy, typically modelling it as a probability distribution over a set of actions (stochastic policy).
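For a discrete action space, that distribution is typically a softmax over a network's output scores. A tiny sketch of the idea (illustrative only; in practice the logits come from a neural network):

```python
import numpy as np

def softmax_policy(logits):
    """Turn unnormalized action scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

def sample_action(rng, logits):
    """Draw an action from the stochastic policy."""
    probs = softmax_policy(logits)
    return rng.choice(len(probs), p=probs)
```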
You can now model the policy with a function or neural network and optimise it by maximising the performance of the policy using gradient ascent.\nTODO: Write-up on Policy Gradient Theorem Derivation\nTo explore Policy Gradient methods, we first implement the REINFORCE algorithm, which is a basic Monte-Carlo approach that estimates your return through multiple sample trajectories.\nREINFORCE agent solving Cart Pole and playing Flappy Bird/Pixelcopter We then train two agents to solve the classic Cart Pole problem and also Flappy Bird!\nUnit 6: Actor-Critic Methods One downside to the REINFORCE algorithm is its high variance, since it relies on Monte-Carlo sampling. To mitigate this you need to sample over a large number of trajectories, which reduces sample efficiency and is cost prohibitive.\nOne methodology that tries to combat this is the Actor-Critic approach, which attempts to combine both Policy-based and Value-based methods. You train an Actor, which tries to learn our policy function, and also a Critic, which learns a value function that assists the policy by measuring how good each taken action was. By knowing how good each policy update is, we reduce gradient variance.\nA2C agent solving robotics/control problems like learning to walk or manipulating a robotic arm! We then train two agents to solve several classic control problems from PyBullet and Panda-Gym with Stable Baselines 3 and A2C.\nUnit 7: Multi-Agent Reinforcement Learning We take a brief detour into the world of Multi-Agent RL. The course\u0026rsquo;s treatment of this problem space is very brief compared to Georgia Tech CS-7642\u0026rsquo;s coverage, as we only briefly review several common implementation patterns. There\u0026rsquo;s no exploration of the Game Theory underpinnings of this problem space.\nMulti-Agent Soccer The most interesting part of this course is the introduction of Self-Play, leveraging multiple copies of our agent to learn and train itself.
We briefly look into the MLAgents library, which leverages the Unity Game Engine to train agents in pre-made environments. The library predetermines matches between different copies of our Agent based on their ELO. It then continually matches agents against each other, and each agent should gradually improve and learn from the process!\nUnit 8: Proximal Policy Optimization Any Deep RL course wouldn\u0026rsquo;t be complete without looking at Proximal Policy Optimization (PPO), a state-of-the-art RL policy optimization algorithm. It\u0026rsquo;s model-free, has few hyperparameters and typically performs very well on most RL problems out of the box.\nPPO and its algorithmic brethren approach the RL problem by trying to converge on an optimal policy while avoiding overly large policy updates during training. This is primarily motivated by empirical observations that smaller updates tend to converge to an optimal solution, while larger policy updates lead to \u0026ldquo;falling off a cliff\u0026rdquo; and having no chance of recovering to a previously better policy. PPO achieves this by enforcing a constraint on its objective function, clipping the probability ratio r_t(θ) between the new and old policies: L_CLIP(θ) = E_t[min(r_t(θ) A_t, clip(r_t(θ), 1 − ε, 1 + ε) A_t)], where A_t is the estimated advantage.\nWe implement our own version of PPO with CleanRL as a reference implementation. We check our implementation on the classic Lunar Lander problem:\nSolving Lunar Lander with PPO - 2M timesteps Finally, to demonstrate the versatility of PPO, we try out PPO with the SampleFactory [https://www.samplefactory.dev/09-environment-integrations/vizdoom/] library and train an agent to play Doom!\nPlaying a simplified DOOM level with PPO Conclusion \u0026amp; Next Steps 100% Completion! Completing all 8 units and having your 12 models pass the required benchmark will reward you with a Certificate of Completion. Having all your assignments pass at 100% will get you an honors certificate!\nThis course only briefly explored the world of Reinforcement Learning.
For myself, I\u0026rsquo;m going to explore multi-agent systems, build an RLHF system and applications with LLMs, read up on Decision Transformers [https://huggingface.co/deep-rl-course/unitbonus3/decision-transformers?fw=pt] and play around with MineRL [https://minerl.io/]. I\u0026rsquo;d also like to explore building my own game adapter, integrating Mupen64 and Starfox 64 into an RL training library.\n","date":"9 April 2023","externalUrl":null,"permalink":"/posts/hugging-face-deep-rl-course/","section":"Posts","summary":"","title":"Hugging Face Deep RL Course","type":"posts"},{"content":"","date":"9 April 2023","externalUrl":null,"permalink":"/categories/python/","section":"Categories","summary":"","title":"Python","type":"categories"},{"content":"","date":"9 April 2023","externalUrl":null,"permalink":"/tags/python/","section":"Tags","summary":"","title":"Python","type":"tags"},{"content":"It\u0026rsquo;s been more than 2 years since I was last at Zen Japanese Restaurant. We opted for the $180 CAD Sushi Omakase course. Like last time, it was 13 pieces of nigiri, but with the addition of appetizers and dessert!\nAppetizer Course - Abalone with Caviar and cucumber jelly salad. Fried Lobster with snow pea. Duck breast with miso daikon Tuna Sea Bream with a dusting of yuzu Chutoro Shrimp Trout Amberjack Amberjack and Flounder. The flounder had an amazing texture that was soft but rubbery and chewy at the same time. Tuna Belly Scallop Uni Skipjack Eel - Favourite piece of the meal.
The flavour and texture were almost like birthday cake Tuna Handroll - The nori was amazingly crunchy and crisp Dessert - Melon, Mochi with Red Bean and Strawberry and Mango Sorbet ","date":"31 December 2022","externalUrl":null,"permalink":"/posts/zen-japanese-restaraunt-2022/","section":"Posts","summary":"","title":"Zen Japanese Restaurant - 2022","type":"posts"},{"content":" Dec 2022 Oct 2022 Mar 2022 Jan 2022 2020 2018 ","date":"28 December 2022","externalUrl":null,"permalink":"/posts/sushi-making-progression/","section":"Posts","summary":"","title":"Sushi Making Progression 2018-2022","type":"posts"},{"content":" Ramping back up with three.js.\nInspiration:\nLink: https://denim-jungle-blinker.glitch.me/\n","date":"28 December 2022","externalUrl":null,"permalink":"/posts/kusama-semi-infinite-mirrors-2/","section":"Posts","summary":"","title":"Kusama Semi-Infinite Mirrors","type":"posts"},{"content":"**Instructor(s): ** Charles Isbell / Michael Littman **Course Page: ** Link\nCS-7642 is a core course for the OMSCS Machine Learning specialization. It serves as an introduction to reinforcement learning and a continuation of CS-7641 Machine Learning\nAt the time of writing, the course consists of 3 major written assignments, 6 homework assignments and a final exam. The course follows Richard Sutton\u0026rsquo;s RL Book very heavily, as do most undergraduate/graduate courses nowadays. The assignments were the main highlight of the course and are designed to be mostly open-ended, forcing you to demonstrate your understanding of the material. You are challenged to write a technical paper (6 page max) usually either solving a particular reinforcement learning problem or replicating a key result in RL research.\nAssignment 1 - Temporal Difference Learning # You are tasked with replicating key results from Sutton\u0026rsquo;s seminal 1988 paper on temporal difference learning methods. We basically need to show that TD(λ) is more efficient than perceptron learning.
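The TD(λ) rule from the paper can be sketched with accumulating eligibility traces. A toy tabular version (hyperparameters are illustrative, and this is not my actual assignment code):

```python
import numpy as np

def td_lambda_episode(V, states, rewards, lam=0.7, alpha=0.05, gamma=1.0):
    """Update state values V in place over one episode using eligibility traces."""
    z = np.zeros_like(V)                  # one eligibility trace per state
    for t in range(len(rewards)):
        s, s_next = states[t], states[t + 1]
        delta = rewards[t] + gamma * V[s_next] - V[s]  # one-step TD error
        z *= gamma * lam                  # decay all traces...
        z[s] += 1.0                       # ...and bump the just-visited state
        V += alpha * delta * z            # recently visited states get most credit
    return V
```

With lam=0 this reduces to TD(0); with lam=1 it behaves like a Monte-Carlo update, which is exactly the trade-off Sutton's experiments sweep over.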
This intuitively makes sense, as we\u0026rsquo;re now updating our agent continuously rather than waiting for the final outcome label. We also run several experiments looking at the trade-offs between different lambda parameters. As with most things in ML, there\u0026rsquo;s a tradeoff decision in setting lambda, as you want to balance how far you look into the future and how fast you propagate learnings to your agent.\nAssignment 2 - Lunar Lander # We get to apply our learnings to harder, more state-of-the-art toy learning problems. You are tasked with solving OpenAI\u0026rsquo;s Lunar Lander environment. Your agent needs to land a 2D lander without crashing. Your lander has left/right and upward thrusters and you\u0026rsquo;re rewarded if you land safely and softly within the target area.\nA successful run of my lunar lander agent To solve this problem we leverage Deep Q-Networks and implement a DQN agent with experience replay. This technique was first introduced and popularized by DeepMind researchers Mnih et al. back in 2015. I essentially replicated their algorithm verbatim from their paper in PyTorch (we are restricted from using any existing libraries like rl-baselines).\nAssignment 3 - Football # The problems get harder! We are now tasked with solving a multi-agent reinforcement learning problem. In this assignment we\u0026rsquo;re given a modified version of Google\u0026rsquo;s Football environment, and we\u0026rsquo;re tasked with training an agent that can play 3v3 football. If you thought training one agent was already difficult, you now have the added problem of training several agents that have to co-ordinate and interact with your environment together.
The goal is to demonstrate an improvement in agent behaviour compared to 3 provided baseline algorithms.\nMy Agent learning how to pass and shoot My paper ended up investigating how centralized critic methods improve learning performance and potentially help agents better co-ordinate with each other.\nConclusion\nCS-7642 has been one of the more challenging and rewarding courses I\u0026rsquo;ve taken in OMSCS. Definitely complement your learning with RL courses from other universities. Most notably, I watched David Silver\u0026rsquo;s RL Lectures. Berkeley\u0026rsquo;s Deep RL course was also extremely helpful for understanding current state-of-the-art algorithms that weren\u0026rsquo;t covered heavily in the lecture material, like Deep Q-Learning and PPO. Reinforcement learning is a very fascinating field that\u0026rsquo;s advancing very quickly. Most interestingly, it played a pivotal part in ChatGPT\u0026rsquo;s recent success, which relied on RL with Human Feedback for its training.\nI\u0026rsquo;ll be continuing my learning journey into the Spring as I take HuggingFace\u0026rsquo;s Deep RL course. See you then!\n","date":"26 December 2022","externalUrl":null,"permalink":"/posts/cs-7642-reinforcement-learning/","section":"Posts","summary":"","title":"CS-7642 Reinforcement Learning","type":"posts"},{"content":"I try to recreate my favourite dishes from Benu - a 3 Michelin star restaurant in San Francisco and one of The World\u0026rsquo;s 50 Best Restaurants back in 2019. Head chef Corey Lee draws from many different cuisines, with a focus on Korean and Cantonese techniques and flavours.\nI could only find frozen mackerel and had to fry it for it to be edible. I also couldn\u0026#39;t figure out the correct medley of vegetables and what they used for the outer wrap. Just gave up on this one :( Attempt #1 - Couldn\u0026#39;t find jellyfish.
Just did a light tempura batter and garnished with \u0026#39;leaves\u0026#39; Pear marinade on the steak worked very well. Glaze on the baby anchovies could have been sweeter. Not sure how they made a sauce of that consistency, so I made a chimichurri with scallion and basil Abalone - Steamed mine in oyster sauce and scallion \u0026amp; garlic rather than basting in butter Attempt #2 - Jellyfish added a great textural component. Still couldn\u0026#39;t quite get the batter right. This time it was too thick Milk Pudding - I couldn\u0026#39;t get the same consistency and I skipped the peat jam/sauce ","date":"25 December 2022","externalUrl":null,"permalink":"/posts/ben-yu-does-benu/","section":"Posts","summary":"","title":"Ben Yu does Benu","type":"posts"},{"content":"","date":"1 November 2022","externalUrl":null,"permalink":"/posts/dreams-of-blade-runner/","section":"Posts","summary":"","title":"Dreams of Blade Runner","type":"posts"},{"content":"**Instructor(s): ** David Joyner **Course Page: ** Link\nThis class was simultaneously an introductory course about educational technology and an advanced, project-oriented class on designing or researching technology’s intersection with education. The course provides students with information about a large number of topics within educational technology, including pedagogical strategies, research methodologies, current tools, open problems, and broader issues.\nCOURSE HIGHLIGHTS # CS-6460 was an extremely open-ended course, so you\u0026rsquo;ll likely get as much out of this course as you put into it.
You\u0026rsquo;re given the option to pursue one of three tracks:\nDevelopment - Work on a project/tool that will improve educational technology\nResearch - Conduct research on some field in educational technology, typically some form of study or survey of MOOCs\nContent - Develop your own course material and/or MOOC\nFor myself, I chose to pursue a combination of the development and research tracks, using this course as a structured format to learn more about Natural Language Processing and its application to educational technology\nCourse Gotchas # Start early, especially if you\u0026rsquo;re taking this course during the summer semester! Look ahead at the assignments and prepare ahead of time. There is a substantial amount of writing in the first couple of weeks and you\u0026rsquo;ll be reading a LOT of papers. If you already have a topic or project you want to tackle, structure your research and preparation before the course starts. It\u0026rsquo;ll make your life a lot easier since you can spend more time on development/research There is a course participation component. You should be able to get full marks through just completing regular peer feedback. Make sure you review how the points are calculated so you know at a minimum how many points you currently have. The course instructor will provide you snapshots every month, but it\u0026rsquo;ll also help to have a mental model of where you should be at any point in the semester Research Topic: Multi-Document Summarization # For my research topic, I investigated the problem of multi-document summarization of medical research for literature reviews as part of a shared task for the Workshop on Scholarly Document Processing 2022. The goal of the task was to build a machine learning model that could be applied to any set of medical research documents and generate a succinct summary that is understandable by a medical researcher.
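To give a mechanical sense of what summarization involves, here is a deliberately naive extractive baseline (score sentences by average word frequency and keep the top k). It is nothing like the neural abstractive systems used in the shared task, just a toy:

```python
from collections import Counter

def extractive_summary(text, k=1):
    """Pick the k sentences whose words are most frequent across the document."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())

    def score(s):
        # Average word frequency, so long sentences don't dominate.
        words = s.split()
        return sum(freq[w.lower()] for w in words) / len(words)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return ". ".join(top) + "."
```

The gap between a baseline like this and a useful review summary is exactly why evaluation metrics matter so much for the task.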
This task used two datasets of review summaries derived from the scientific literature [1][2]. Participating teams were then evaluated using automated and human evaluation metrics.\nI wasn’t able to make any improvements on the dataset benchmark, but I was able to establish some evidence that current summarization metrics are insufficient for measuring summarization accuracy. I also built a small web tool to demonstrate the viability of summarization models for future investigators. Luckily enough, my work was accepted and presented at the workshop proceedings at COLING 2022!\n","date":"17 October 2022","externalUrl":null,"permalink":"/posts/cs-6460-education/","section":"Posts","summary":"","title":"CS-6460 Educational Technology","type":"posts"},{"content":"Robin delivers a unique and refreshing take on sushi in an already crowded space where innovation is both difficult and sometimes even frowned upon. It\u0026rsquo;s clear from their selection of ingredients and drinks, the ambience and the decor that the team wants to present a modern 21st-century take on sushi that also emphasises the rich local ingredients of San Francisco. As with most high-end establishments in the city, Robin offers an omakase-only menu, ranging from $99 to $199. You can\u0026rsquo;t go wrong with any price point, as it only changes the types of pieces they\u0026rsquo;ll offer and you can order more as you go.\n1) Halibut Sashimi with pineapple and broth 2) Wagyu Tartare on a Toasted Nori chip topped with Uni and Asian Pear 4) Trout and Snow Pear 6) Black Cod 7) Tuna with Beef Butter 8) Green Apple Amberjack 9) Spanish Mackerel Daikon, Chives and chili oil Robin has a real emphasis on fattier, Western-favourite pieces like tuna, trout, uni and caviar. There was also a real focus on using sauces and toppings to either elevate or balance out the fish.
Some particular highlights:\nSteak Butter with Maguro - The butter was a real surprise and you could taste the steak at the back of your palate. Would love to just have that butter on every other piece\nMackerel with chili oil - I love mackerel and it\u0026rsquo;s usually great on its own. The chili oil added another layer that I wasn\u0026rsquo;t expecting and it was a very playful piece\n1) Scallop 3) Hokkaido Uni with Caviar 4) Curry and Papaya with Cod 5) Maguro with black garlic sauce 7) King Salmon with Truffle 8) Sesame Noodle with Truffle 9) Sturgeon Caviar, Aioli \u0026amp; Potato CHIP?! The papaya curry salad with cod was an unexpected treat and a great palate cleanser. I wasn\u0026rsquo;t expecting such flavour contrasts given the visual presentation of the dish\nKing Salmon with Truffle?! - A tad over-the-top, but an amazing piece. I mean\u0026hellip; it\u0026rsquo;s truffle\nNoodle with Truffle - Another Robin signature. Umami-bomb is the name of the game here. They should make a ramen out of this\nPotato Chip with Caviar - THE Robin signature. Probably the most unique sushi piece I\u0026rsquo;ve ever had. The chip really hits you up-front, but the caviar and aioli come through at the end\n1) Corn Miso Soup 2) Wagyu in Foie Gras Snow 3) Coconut Soft Serve dessert w chocolate chips, walnuts, caramelized banana puree 4) Bourdain! Corn Soup - Simple and amazing. Reminds me of my childhood and home\nA5 Wagyu and Foie Gras Snow - The perfect summary of Robin: perfect simplicity in an amazing cut of beef, then drowned in decadence. The beef and foie melt in your mouth; an amazing piece to wrap up the meal.\nI really appreciated how Robin pushes boundaries and presents a modern take on sushi with a focus on bold flavours and balance through sauces/toppings and timing.
Their omakase menu showed a lot of thoughtfulness and understanding of the ingredients they\u0026rsquo;re presenting, and pushes our conception of what\u0026rsquo;s possible within the medium of sushi.\n","date":"2 October 2022","externalUrl":null,"permalink":"/posts/robin/","section":"Posts","summary":"","title":"Robin","type":"posts"},{"content":"As part of a half-hearted joke with my team\u0026rsquo;s intern, we promised to take him to a restaurant of his choosing if he completed his project and presentation on time. In true Twitch fashion, the restaurant would have to be memeable, so we decided to go to Benu with Ben Yu.\nDishes # 皮蛋: Quail Egg, potage of cabbage, cream \u0026amp; bacon, ginger - The Benu Classic. The egg itself wasn\u0026rsquo;t very alkaline. The potage was amazingly flavourful Mussel stuffed with vegetables and glass noodles Pig ear with jellyfish, radishes and chives Lawn Roll - Mackerel, kelp, and vegetables. Very surprising dish with many different textures and flavours happening all at once. One of my favourites King Prawn wrapped in Jellyfish and topped with angelica tree leaves - 2nd favourite dish of the night. The batter was very airy and the prawn was fried to perfection Tofu flower, chili oil and chicken consommé - a very light and flavourful palate cleanser. Chili oil had some kick and the tofu was super soft and slippery Taco tribute to acorns - Iberico Ham, Truffle \u0026amp; Acorn - One of Corey\u0026#39;s favourite flavor combos. The whole dish was a massive umami bomb. The stone was also warmed, which was a nice touch. My favourite dish of the night! Oat bread with honey, butter, ginseng 小籠包 with homemade soy sauce and vinegar: Another Benu Classic.
The skin was paper thin and the lobster consommé and butter explode in your mouth with flavour Jasmine Rice cooked in a gamasot with blood sausage and aged kimchi - Low-key might be the best fried rice I\u0026#39;ve ever eaten Whole abalone roasted in butter Water chestnut cake with dried rockfish, Chinese mustard greens and three mustard sauce Roast turbot glazed in spicy fermented pepper sauce, braised chrysanthemum and radish, fried garlic flowers - The turbot was prepared so that you can experience the belly, cheek and fillet mulhwe with iced water kimchi broth, cured sea bream, sea urchin, oysters, radish, sesame leaf and seaweed Charcoal-grilled beef rib braised with pear and chili, baby anchovies, chilled lettuce and scallion sauce Pork Two Ways: 1) steamed pork belly thinly sliced and served chilled with hot mustard dipping sauce and Korean melon dongchimi 2) charcoal grilled pork cheek with tomato meju, preserved ramps and lettuce wraps Omija and Olive Oil - experience five different flavours at once - sweet, salty, savoury, bitter and spicy Milk Pudding with salt, smoke and peat Iced Barley Tea Mint Curls This was truly a life-changing meal. The dishes were very playful while still respecting the Asian cultural heritage from which they were inspired. My favourite dishes would have to be:\nAcorn, Iberico Ham and truffle taco\nKing Prawn wrapped in Jellyfish\nMackerel, kelp and vegetables roll\n小籠包\nKimchi Fried Jasmine Rice cooked in a gamasot\nStay tuned as I try to replicate these dishes in my new blog series: Ben Yu copies Benu!\n","date":"29 August 2022","externalUrl":null,"permalink":"/posts/ben-yu-benu/","section":"Posts","summary":"","title":"Ben Yu @ Benu","type":"posts"},{"content":"**Instructor(s): ** Charles Isbell / Michael Littman **Course Page: ** Link\nCS-7641 is a core course for the OMSCS Machine Learning specialization.
It serves as an introduction to three core fields of study in machine learning:\nSupervised Learning Unsupervised Learning Reinforcement Learning Prof. Isbell takes a nuanced approach to teaching this course, emphasizing synthesis over rote memorization. As such, the course work is fairly open-ended, which allows students to demonstrate their understanding of the material. At the time of writing, the course consists of 4 major written assignments, 2 problem sets, a midterm and a final exam. The assignments are where you\u0026rsquo;ll likely spend 90% of your time during the course. Each assignment requires you to experiment with various machine learning algorithms on datasets of your choosing, and to synthesize your learnings and insights into a detailed paper (usually up to 10 pages). Though the rubric is hidden from students, it\u0026rsquo;s typically pretty fair and upfront if one completes the assigned lectures and attends office hours.\nCOURSE HIGHLIGHTS # With 16 weeks of course material, this course covers a lot of ground. Make sure you keep up with the weekly lectures, lest you fall behind like I did! I\u0026rsquo;ll try to highlight the most interesting concepts that aren\u0026rsquo;t usually covered in other ML MOOCs.\nLazy Decision Trees\nThe most common formulation of Decision Trees you encounter is typically trained using a greedy algorithm that tries to maximize information gain with each node, most commonly known as the ID3 algorithm. It\u0026rsquo;s actually entirely possible to develop the exact reverse of this algorithm, where instead of greedily determining the best nodes, we lazily evaluate and build the tree at the point of inference. Lazy decision trees (Friedman, Kohavi \u0026amp; Yun 1996) have the additional benefit of making the best split for each test instance and avoid the unnecessary splits that a greedy algorithm would make.
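The greedy criterion ID3 maximizes at each node can be computed directly. A textbook sketch for discrete labels (not tied to any particular library):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction from splitting `labels` into the given subsets."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in groups)
    return entropy(labels) - remainder
```

ID3 evaluates `information_gain` for every candidate attribute and splits on the best one; the lazy variant defers that evaluation until a test instance arrives.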
The tradeoff, as we explored with many algorithms in this course, is that this doesn\u0026rsquo;t scale very well with large datasets, since each inference would require you to split over the entire dataset, which would be very time or memory intensive.\nBoosting RARELY overfits\nOne surprising result from boosted learners (where we take an ensemble of weak learners and combine their results to produce a stronger learner) is that even with many iterations and many different learners, boosting still very rarely overfits. Although we don\u0026rsquo;t go over the actual mathematical derivation of this result, Prof. Isbell explains that this results from boosting progressively increasing the margin of already correctly labelled examples. With this tendency to keep maximizing the margin, additional learners increase the ensemble\u0026rsquo;s confidence on the training set rather than fitting noise.\nWeak Learners only help improve the margin How does the Kernel Trick work?\nHow do you separate non-linearly separable data with only linear separators? You map it to a higher dimensional space where you can separate them!\nCan\u0026rsquo;t be separated in 2D space, but separable in 3D! To avoid building an explicit map to a new higher dimensional space, we can rely on a specific class of functions called kernel functions, which compute the inner product in the higher dimensional space directly from inputs in the lower dimensional space.\nComputational Learning Theory\nThe most interesting section of the course was definitely learning about Probably Approximately Correct (PAC) Learning, which is a framework for defining computational complexity in machine learning. Just like in an Algorithms class, we can derive bounds on an algorithm\u0026rsquo;s generalization error and complexity to determine whether a problem should be learnable.
Generally, we can define a concept class as PAC-learnable by a learner L using hypothesis class H if and only if L will, with probability 1 − δ, output a hypothesis h ∈ H such that error_D(h) ≤ ε, with time and sample complexity polynomial in 1/ε, 1/δ and n (or in layman\u0026rsquo;s terms, a concept is learnable if it can be learned to a reasonable degree of accuracy within a reasonable amount of time).\nWe were introduced to Haussler\u0026rsquo;s Theorem, which provides a bound on the number of data samples m you need for a problem to be PAC-learnable relative to the size of your hypothesis space:\n$$m \geq \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$$This presents a problem, since most hypothesis spaces are actually infinite in size! To handle infinite spaces, we turn to the concept of the Vapnik–Chervonenkis dimension, which measures a hypothesis class\u0026rsquo;s ability to \u0026lsquo;shatter\u0026rsquo; or split points. With this concept in hand, it turns out we can put a bound on sample sizes even with infinite hypothesis spaces:\nA hypothesis space H is PAC-learnable if and only if its VC dimension is finite!\nRandom Optimization and No Free Lunch Theorem\nIt can be shown that, averaged over all optimization problems, no search strategy is expected to do better than any other. 
This means that to do any better than random search, your algorithm will need to have some inherent bias that makes it more suited for that particular optimization problem.\nClustering Impossibility Theorem\nWe can view clustering algorithms as having three fundamental properties.\nRichness - For any assignment of clusters, we can define some distance metric under which that clustering can be found Scale Invariance - Scaling distances by a constant doesn\u0026rsquo;t affect clusters Consistency - Shrinking intra-cluster distances and expanding inter-cluster distances does not change clusters It can be shown that there exists no clustering function that can satisfy all three properties!\nThe BEST Classifier\nFor every classification problem there exists a globally optimal classifier that you can\u0026rsquo;t outperform, known as the Bayes Optimal Classifier. It is the classifier that has the lowest probability of misclassifying a datapoint on average. This means that no other classification method can outperform the BOC on average, with the same hypothesis space and prior knowledge! This is useful for quantifying feature importance and relevance for feature preprocessing.\nRandom Projections\nRandomly projecting your dataset can be just as effective as other dimensionality reduction algorithms like PCA and ICA. This is largely due to the Johnson-Lindenstrauss lemma, which states that large sets of vectors in a high-dimensional space can be linearly mapped into a space of much lower (but still high) dimension n with approximate preservation of distances.\nReinforcement Learning and Game Theory\nI still don\u0026rsquo;t fully understand this part of the lectures, but since both fields model problems as Markov Decision Processes, we can apply techniques from Reinforcement Learning, like Q-Learning, to settings from Game Theory like General Sum Stochastic Games.\nConclusion\nCS-7641 is a great introductory course to Machine Learning. Prof. 
Isbell really pushes you to truly understand the material more than just on the surface level. The written assignments force you to truly digest and synthesize the material and appreciate the core mathematical underpinnings and challenges of machine learning. If I were to summarize everything I learned in one meme, it would be:\n# ","date":"21 December 2021","externalUrl":null,"permalink":"/posts/cs-7641-machine-learning/","section":"Posts","summary":"","title":"CS-7641: Machine Learning","type":"posts"},{"content":" #5 Ippudo - Double soup with dashi and tonkotsu broth. Noodles were perfectly al-dente and chewy. Chashu was thinly sliced and cooked very well. Only downside is the ramen egg was an additional charge. #4 - Taishoken - Classic tsukemen. I was actually expecting a more flavourful broth and was pretty underwhelmed by the quality. Pork slices and ramen egg were great. #3 - Hinodeya - I\u0026rsquo;m a sucker for dashi broth. Their portions are very generous which always wins extra points in my book. Pork is cut thicker, which I actually prefer. Menma and ramen egg were on point. My go-to ramen spot in SF that doesn\u0026rsquo;t completely destroy your wallet. #2 Mensho - The gods of paitan broth. This place is an SF favourite with lines consistently wrapping around the block. Their duck paitan (白湯 - white broth) was so rich and creamy without being overly salty. The noodles were cooked well and ramen egg was luxurious. A tad pricey, but worth the wait. #1 Ramen Nagi - Undisputed king of Bay Area ramen. Their Palo Alto location is usually the most consistent from my experience. You can\u0026#39;t go wrong with any of their ramens, but I love their Red King. Every component is top tier and the portions are huge. Their veggie ramen is an actual revelation. The broth has so much umami, and adding a hashbrown adds so much depth and texture to the dish. I\u0026#39;ve since stolen this technique for my homemade late night ramens. 
","date":"1 September 2021","externalUrl":null,"permalink":"/posts/top-5-ramen-spots-in-the-bay-area/","section":"Posts","summary":"","title":"My Top 5 Ramen Spots in the Bay Area","type":"posts"},{"content":"Instructor(s): Tucker Balch, Ph.D. / David Joyner Course Page: Link\nCS-7646 is another introductory course on machine learning-based trading strategies. The course is broken into 3 major components:\nManipulating financial data with Pandas Finance Fundamentals: CAPM, Technical Analysis, Options, Modern Portfolio Theory \u0026amp; Mean-Variance Analysis Machine Learning Techniques: Decision Trees, Reinforcement Learning \u0026amp; Q-Learning I took this course during the Summer semester, which shortened the project and exam timelines from a regular semester. In terms of course work, there were 8 assignments (1 due each week) and 2 exams.\nCourse Highlights # Learning about Finance Fundamentals was the major highlight of this course. With the whole recent debacle around WallStreetBets/GME and the increasing popularity of cryptotrading, I\u0026rsquo;ve been meaning to learn more about technical analysis and brush up on the basics of trading.\nThe course reviews basics like market mechanics, exchanges \u0026amp; order books, option trading strategies, technical indicators, the Capital Asset Pricing Model (CAPM), the Efficient Market Hypothesis and Modern Portfolio Theory.\nCandlesticks to the moon! Armed with those fundamentals, the final project has you build your own ML Trader against historical data and compare it to a manual strategy that you construct yourself!\nSample iteration of my ML Trader vs Benchmark based on 2011-2012 historical data Overall Assessment # A great course if you have an interest in finance or machine learning with time-series data. 
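As a taste of the technical-indicator material, here is a hypothetical moving-average crossover signal sketched in plain Python (the course itself works in Pandas; the function names here are invented for illustration):

```python
def sma(prices, window):
    """Trailing simple moving average; None until enough data points exist."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def crossover_signals(prices, fast=3, slow=5):
    """+1 when the fast SMA crosses above the slow SMA, -1 when it crosses below."""
    f, s = sma(prices, fast), sma(prices, slow)
    signals = [0] * len(prices)
    for i in range(1, len(prices)):
        if None in (f[i], s[i], f[i - 1], s[i - 1]):
            continue
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals[i] = 1   # bullish crossover: go long
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals[i] = -1  # bearish crossover: exit/short
    return signals

prices = [10, 10, 10, 10, 10, 11, 12, 13, 14, 15]
print(crossover_signals(prices))
```

A real strategy would layer position sizing and transaction costs on top of a signal like this, which is roughly what the final project has you explore.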
The coursework isn\u0026rsquo;t very demanding if you already have some background with Python and libraries like Pandas, Matplotlib and Numpy.\nPros:\nCoursework isn\u0026rsquo;t very demanding, 5-10 hours a week at most (depends on the project difficulty) Lectures were well structured and had good production quality Interesting course material if you\u0026rsquo;re into Finance You get to watch \u0026lsquo;The Big Short\u0026rsquo; for school! Cons:\nSome of the earlier assignments were tedious. Mostly learning Pandas, figuring out vectorization, etc\u0026hellip; I would have liked more emphasis on the Machine Learning sections like diving deeper into reinforcement learning, decision trees and more state of the art techniques used today, like deep neural nets and recurrent nets Exam questions were a bit tedious. I think a quiz format would have suited the course material a bit better. ","date":"23 August 2021","externalUrl":null,"permalink":"/posts/cs-7646-machine-learning-for-trading/","section":"Posts","summary":"","title":"CS-7646: Machine Learning for Trading","type":"posts"},{"content":"Instructor(s): Aaron Bobick / Irfan Essa Course Page: Link\nThis course provides a gentle introduction to computer vision covering a wide range of topics including:\nFeature Detection \u0026amp; Extraction: Hough Transforms, SIFT descriptors \u0026amp; RANSAC Camera Models and Stereo Geometry Motion Models - Hierarchical Lucas and Kanade Kalman and Particle Filters for Motion Tracking Convolutional Neural Networks Overall this course was very heavy in terms of project work and course material, with easily 20+ hours of work/studying every week. The recorded lectures are done very well with high production quality and a decent amount of depth (you\u0026rsquo;ll need to brush up on your Linear Algebra and Calculus). 
In total there were 6 projects, 1 final project and 1 exam, which can be pretty demanding since there\u0026rsquo;s a non-trivial amount of coding that needs to be done for each assignment every two weeks.\nThe individual projects were definitely the highlight of this course, where you\u0026rsquo;re expected to implement and learn how to use core Computer Vision algorithms to solve a toy/real-life problem. You\u0026rsquo;ll be well-versed in OpenCV by the end of the course! I\u0026rsquo;ve listed some of the projects that I found to be the most interesting:\nEdge and Object Detection: # Using Hough Transforms, we can detect shapes by transforming edges into a parameter space and using a simple voting procedure to figure out if it\u0026rsquo;s likely a feature. Hough transforms work well for parameterized shapes like lines or circles (like our traffic signs), but can also be generalized to any shape.\nDetected Stop Sign with Hough Transform Combined with some fairly basic image pre-processing, we implemented simple traffic sign detection, since signs are mostly composed of simple shapes like triangles, circles and octagons.\nFeature Tracking: # A more robust way to do tracking is to use template matching against a known pattern. Utilizing markers placed in a real scene, we can use this to calculate their location relative to a video camera and find a homography or projective transform.\nThis lets us project a video onto the scene, implementing some very basic Augmented Reality!\nOptical Flow: # We implement basic motion/optical flow detection using an iterative Lucas-Kanade algorithm with Gaussian pyramids:\nFinal Project - Convolutional Neural Networks: # Digit Classification with a ResNet As a final project, we had free rein to implement multiple different advanced computer vision algorithms. 
I chose to explore training a Deep Residual Network on The Street View House Numbers (SVHN) Dataset, and performed a comparative study of the architecture\u0026rsquo;s performance against a simpler convolutional neural network and a CNN built from ImageNet through transfer learning.\n","date":"20 July 2021","externalUrl":null,"permalink":"/posts/cs-6476-computer-vision/","section":"Posts","summary":"","title":"CS-6476: Computer Vision","type":"posts"},{"content":"Instructor(s): Maria Konte Course Page: Link\nAs the name implies, CS-6250 is an introductory course to Computer Networking covering a wide range of topics from the evolution of the internet, basic routing algorithms, software-defined networking, internet security, CDNs and modern applications like VoIP, video and IoT.\nCourse Highlights # The course is structured in weekly segments, capped off with a mandatory quiz. At a high level, we covered the following topics:\nNetworking OSI Model Transport Layer - TCP and UDP Intradomain Routing - Distance Vector \u0026amp; Open Shortest Path First (OSPF) Algorithms Interdomain Routing - Border Gateway Protocol and Internet Exchange Points Router Design Software Defined Networking Internet Security, Surveillance \u0026amp; Censorship Video, CDNs and Overlay Networks The reading material was surprisingly thorough and was far superior to the recorded lecture content. The weekly quizzes were a good forcing function to keep you on top of the required material and prepared you very well for the two exams.\nProjects # The project work was hit-or-miss for me personally. Some were tedious configuration-style homework, like configuring a firewall, or doing analysis of BGP historical data to find hijack and route leak events. Personally, the following projects were the most interesting:\nDistributed Minimum Spanning Tree\nWe take the standard algorithmic problem of a minimum spanning tree, and implement it with a messaging protocol! 
Prim\u0026rsquo;s and Kruskal\u0026rsquo;s won\u0026rsquo;t work here since they both assume we can process one node at a time and that we know the whole state of the graph. Thinking through the message data structure and message broadcast logic was slightly tricky, but interesting!\nDistance Vector Routing\nHow do we update our network\u0026rsquo;s routing tables? Implement a distributed Bellman-Ford algorithm! Similar to the MST project, there were some tricky edge cases to watch out for, but it was a super interesting and practical algorithm to implement from scratch.\nBGP Hijacking Simulation\nWe simulated a BGP Hijack attack with Mininet to force a rerouting of our own website to an attacker\u0026rsquo;s website. Another practical assignment that helps demonstrate some real-world problems network providers face every day.\nOverall Assessment # Computer Networks was my second/third course in the OMSCS program (taken in parallel with Computer Vision). It provided a good introduction to networking which was helpful for me personally, since I\u0026rsquo;ve never formally studied networking in my undergraduate program. Overall, I think I spent ~2-3 hours per week reviewing course material. Assignment work hours varied by difficulty with the routing assignments taking up the most time.\nPros:\nNot super-demanding in terms of hours on assignments and quizzes Broad coverage of networking concepts. Content seems to have been revamped and improved upon from previous semesters Cons:\nQuiz grading wasn\u0026rsquo;t always consistent. A lot of gotcha questions with inconsistent wording Certain sections of the course weren\u0026rsquo;t very in-depth. Would have preferred more depth in the more interesting sections like CDN design, video streaming, etc\u0026hellip; In summary, this was a decent course to start a Computing specialization. 
If you\u0026rsquo;re looking for something more challenging, look at the more advanced course offerings.\n","date":"21 June 2021","externalUrl":null,"permalink":"/posts/cs-6250-computer-networks/","section":"Posts","summary":"","title":"CS-6250: Computer Networks","type":"posts"},{"content":"What better way to celebrate my 30th birthday than going to a Michelin star restaurant! Saison has been on my bucket list for quite some time, and fortunately, as COVID restrictions were winding down in San Francisco, they started offering patio dining with a more limited menu at a more approachable price!\nSaison is a New American restaurant opened by the famed Joshua Skenes back in 2009 with an emphasis on open fire cooking. In a few short years it earned its three Michelin stars and launched Skenes into chef superstardom. Although Skenes left his famed establishment back in 2019, Saison still keeps to its focus on local seasonal ingredients and highlighting the complexities of open hearth cooking.\nWe were served an amazing 9-course menu that highlighted local Californian ingredients:\nSaison Reserve Caviar - salsify and lettuces Amberjack - passion fruit Trout - cordyceps Sweet Potato - fermented greens Sourdough Brioche - miso butter Duck with grilled hearts \u0026amp; gizzards, duck sausage with nasturtium leaf, preserved shinko pear, fermented cabbage and bone broth A5 Wagyu - chrysanthemum Champagne and Gooseberry Sunchoke chocolate and rosemary The first course was caviar. Caviar is always amazing. The poached lettuces were cooked to perfection\nThe amberjack dish was light, subtly smoky and refreshing. The passionfruit and succulent leaves added surprising complexity and textural contrast to a surprisingly simple dish.\nThe third course was hay-smoked trout with pork broth and cordyceps. 
Again the hearth comes through in the fish and the cordyceps add some much needed contrast to the delicate and rich trout \u0026amp; broth.\nThe highlight of the night was definitely their fourth course, sourdough brioche and smoked sweet potato. I\u0026rsquo;ve never had such pillowy and soft brioche bread, and the complexity of the sourdough really sings through. When combined with a healthy spread of their indulgent miso butter, you\u0026rsquo;re in bread heaven. The smoked sweet potato was really surprising and knocked our socks off with flavour and complexity. The potato was soft but crisp, and the fermented greens really helped highlight the smoky and delightfully rich flavour of the potato. It really surprised us how much flavour the smoke imbued into the potato and really exemplified the spirit of Saison.\nThe main course was exemplary and was a textbook example of cooking with the whole animal and truly respecting your ingredients. We had a dry-aged smoked five spice duck, which was cooked to perfection. The fermented cabbage and pear was a perfect complement to help cut through the richness of the duck. The grilled gizzards were amazing and surprisingly lacked the irony taste you\u0026rsquo;d usually associate with innards. The duck sausage was again super rich and flavourful. The bone broth helped tie together this incredibly rich course with a clean finish.\nAs an extra for my birthday, I opted to add a wagyu beef course. Probably the best beef I\u0026rsquo;ve had in my life! It was so soft, rich and fatty, it would literally melt in my mouth with each bite.\nThe sixth course was a light refreshing break before dessert. We were served a gooseberry and champagne sorbet of sorts with shaved ice on top. The gooseberry was definitely unique and I enjoyed its nice acidic tang.\nAnd for the finale, we had dessert. I had the chocolate-sunchoke, which had a cookie crust, chocolate mousse and a caramelized creme topping. 
Not the most mind-blowing dessert I\u0026rsquo;ve ever had, but it was a good way to finish the meal.\nSaison was an unforgettable dining experience. Their brioche sourdough was to die for and I\u0026rsquo;ll still be singing the praises of that simple sweet potato dish until I die of old age. Definitely looking forward to visiting again and experiencing what their full menu has to offer.\n","date":"11 June 2021","externalUrl":null,"permalink":"/posts/saison/","section":"Posts","summary":"","title":"Saison","type":"posts"},{"content":"After being holed up at home for almost a year, we decided to celebrate Valentine\u0026rsquo;s Day with some Michelin star sushi! Considering the pieces were prepared well in advance and survived a harrowing drive through Downtown SF traffic, the quality of the sushi was surprisingly great!\nMy favourite pieces:\nKing Salmon with Cherry Leaf - the two complemented each other and helped highlight the natural sweetness of the salmon The handroll was extremely flavourful and luxurious. Wished they filled the whole box with just crab, roe and uni! Zuke Toro - Soy Cured Fatty Tuna with Caviar Hotate - Hokkaido Scallop Hon Maguro - Bluefin Tuna Kanpachi - Amberjack Sake - King Salmon with Cherry Leaf Tai - Red Snapper with Mustard Miso Sawara - King Mackerel with Garlic Momiji Zuke Sake - Soy Marinated King Salmon Ikura/Uni/Crab Handroll ","date":"14 February 2021","externalUrl":null,"permalink":"/posts/ju-ni-covid-edition/","section":"Posts","summary":"","title":"Ju-Ni : Covid Edition","type":"posts"},{"content":"Instructor(s): Jay Summet / Sebastian Thrun Course Page: Link\nCS-7638 is an introductory course that covers basic techniques used in robotics. 
Throughout its 16-week span, the instructors cover various techniques/algorithms used in the field of robotics, such as:\nBayes Filters: Histogram, Kalman \u0026amp; Particle Filters PID (Proportional–Integral–Derivative) Controllers Path Finding - A* and Dynamic Programming SLAM (Simultaneous Localization and Mapping) Bayes Filters # We first look at the general problem of state estimation from the perspective of localization (how does a robot figure out where it is in its environment). One way to tackle this problem is to model the robot\u0026rsquo;s belief in its location as a probability. The robot can model its state belief as a probability distribution and update it over time as it gathers measurements about its surroundings:\nThe algorithm generally becomes a matter of recursively updating its state belief based on new control data u and measurement data z:\nThe challenge with implementing such an algorithm lies in modeling the beliefs and state transition probabilities. In order to make the problem tractable, we\u0026rsquo;ll look at several approximations that are more easily computable, but can still effectively model the problem.\nKalman Filters # The most common and best studied approximation is the Kalman Filter. Invented by Rudolph Emil Kalman in the 1950s, Kalman Filters allow for filtering and predicting linear systems. It represents its belief using Gaussians, which drastically improves its simplicity and computation efficiency.\nThe computation efficiency is mostly due to it representing its belief by a multi-variate Gaussian distribution (a mean and uncertainty covariance). 
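In one dimension, that Gaussian belief update can be sketched in just a few lines (a toy scalar filter in Python with invented names, not the actual project code):

```python
def kalman_1d(mu, var, measurements, motions, meas_var, motion_var):
    """Scalar Kalman filter: the belief is a single Gaussian (mean mu,
    variance var), which is what makes both steps closed-form and cheap."""
    for z, u in zip(measurements, motions):
        # Measurement update: product of Gaussians gives a precision-weighted mean
        mu = (meas_var * mu + var * z) / (var + meas_var)
        var = (var * meas_var) / (var + meas_var)
        # Motion (prediction) update: convolution of Gaussians adds uncertainties
        mu = mu + u
        var = var + motion_var
    return mu, var

# Start almost fully uncertain (huge variance), then observe a moving target
mu, var = kalman_1d(0.0, 1000.0,
                    measurements=[5.0, 6.0, 7.0, 9.0, 10.0],
                    motions=[1.0, 1.0, 2.0, 1.0, 1.0],
                    meas_var=4.0, motion_var=2.0)
print(mu, var)  # mean tracks the target near ~11, variance collapses from 1000 to ~4
```

Each measurement shrinks the variance and each motion step grows it, so the whole belief is carried by just two numbers.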
This Gaussian representation drastically simplifies the state estimation and update step complexity, but restricts us to representing linear, uni-modal states.\nAs our first coding project, we implement a 2D Kalman Filter to estimate asteroid positions, such that a spaceship could navigate to the end of a congested 2D asteroid field:\nProject 1 - Estimating Asteroid Positions with Kalman Filters Particle Filters # Contrasting Kalman filters, there are non-parametric approaches to estimate the robot\u0026rsquo;s belief state. A common approach in the field of robotics is the Particle Filter, which approximates the posterior with a finite number of samples. This will never truly represent the state space, but does have the benefit of being nonparametric, and can represent a much broader selection of distributions (such as multi-modal, non-Gaussian beliefs).\nThe major trick in implementing particle filters is how we perform the resampling step. When we retake samples to approximate the posterior, we draw samples with replacement according to an importance weight. Since the weight corresponds to the posterior probability, particles with a higher probability are more likely to be drawn. After enough iterations, the N samples we take should approximate the posterior.\nFor our 2nd project, we implement a 2D Particle Filter to estimate a glider\u0026rsquo;s position given a terrain map and its height. The glider can then estimate its current position and navigate to a predetermined landing location:\nProject 2 - Glider Position Estimation with Particle Filters Proportional–Integral–Derivative (PID) Controllers # After exploring different implementations of Bayes filters, the course moved on to examining control systems and methods for handling errors within a robotic control system. 
Sensors and actuators have error bounds and noise in their measurements and actuations, so robotics systems will typically need to employ some form of modulation/control to handle the error and move the system into its desired state. One of the most commonly used techniques is the proportional–integral–derivative controller.\nA PID controller has three components:\nProportional - Minimizes the direct error Integral - Measures error bias and drift accumulated over time Derivative - Dampens overshoot by reacting to the rate of error change, attempting to flatten the error trajectory Tuning these three parameters in a feedback control system is typically a very powerful tool for most robotics systems, and PID controllers are used in applications ranging from self-driving cars, industrial engineering, temperature regulation, etc\u0026hellip;\nProject 4: Rocket Fuel PID Controller Simultaneous Localization and Mapping (SLAM) # Nearing the end of the course, we finally look at one of the most fundamental problems in robotics, the simultaneous localization and mapping problem (SLAM). The problem happens when a robot does not have access to a map of its environment and also doesn\u0026rsquo;t have access to its current pose/state. The robot must simultaneously acquire its map, while also localizing itself relative to its acquired map. Objects may be landmarks in a feature-based representation, or they might be object patches detected by range finders. When an object is detected, a SLAM algorithm must reason about the relation of this object to previously detected objects. This reasoning is typically discrete: Either the object is the same as a previously detected one, or it is not.\nThere are various approaches to solving SLAM, such as using Kalman Filters or other Information Filters. 
Constraints on the robot\u0026rsquo;s measurements of landmarks and its current state are represented by an information matrix:\nInformation Matrix which represents constraints of map landmarks and robot\u0026rsquo;s state A key insight to the SLAM problem is that the information matrix is typically very sparse, and the strength and importance of a feature is typically related to its distance. This means that we could solve SLAM with an online approach where we only look at currently nearby features, which allows for a constant-time state update. As a final project, we implemented a simplified 2D version of online Graph SLAM with a robot having to pick up gems from random locations.\nProject 5: Online GraphSLAM Overall Assessment # AI4R was my first course in the OMSCS program, and served as an easy introduction to the program and the growing field of robotics. Overall, I think I spent ~4-5 hours per week reviewing course material and working on the quizzes and assignments.\nPros:\nInteresting course content for folks that are interested in AI and Robotics Sebastian\u0026rsquo;s lecture content was fun and engaging Not super-demanding in terms of hours on assignments and quizzes Cons:\nCourse is really an introductory course, and teaches the concepts at a superficial level. 
Students will need to rely on the textbook if they want more rigorous mathematical derivations of each algorithm. In summary, this was an amazing introductory course for anyone just starting on their OMSCS journey and a great way to dive into the world of Artificial Intelligence and Robotics.\n","date":"10 January 2021","externalUrl":null,"permalink":"/posts/artificial-intelligence-for-robotics/","section":"Posts","summary":"","title":"CS-7638: Artificial Intelligence for Robotics","type":"posts"},{"content":"","date":"10 January 2021","externalUrl":null,"permalink":"/categories/math/","section":"Categories","summary":"","title":"Math","type":"categories"},{"content":"","date":"10 January 2021","externalUrl":null,"permalink":"/tags/math/","section":"Tags","summary":"","title":"Math","type":"tags"},{"content":"","date":"10 January 2021","externalUrl":null,"permalink":"/categories/robotics/","section":"Categories","summary":"","title":"Robotics","type":"categories"},{"content":"","date":"10 January 2021","externalUrl":null,"permalink":"/tags/robotics/","section":"Tags","summary":"","title":"Robotics","type":"tags"},{"content":" Squash Rosettes with Carrot Puree and Truffle Aioli\nFrench Omelette\nBeef Wellington\nRack of Lamb\nSourdough Bread It took two weeks to grow the starter from scratch!\nCantonese Roast Pork Belly The crispy skin was to die for\nBon Appetit Perfect Roast Turkey\nIvan Orkin Shio Ramen\nBasque Burnt Cheesecake\n","date":"5 July 2020","externalUrl":null,"permalink":"/posts/cooking/","section":"Posts","summary":"","title":"Cooking","type":"posts"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/categories/aws/","section":"Categories","summary":"","title":"Aws","type":"categories"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/aws/","section":"Tags","summary":"","title":"Aws","type":"tags"},{"content":"","date":"3 July 
2020","externalUrl":null,"permalink":"/categories/data-warehousing/","section":"Categories","summary":"","title":"Data Warehousing","type":"categories"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/data-warehousing/","section":"Tags","summary":"","title":"Data Warehousing","type":"tags"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/categories/golang/","section":"Categories","summary":"","title":"Golang","type":"categories"},{"content":"","date":"3 July 2020","externalUrl":null,"permalink":"/tags/golang/","section":"Tags","summary":"","title":"Golang","type":"tags"},{"content":"If you\u0026rsquo;re working within the AWS Cloud ecosystem, you\u0026rsquo;ve probably had to work with AWS Redshift, which is the de-facto solution for data warehousing and supporting business intelligence, reporting, data, and analytics tools.\nHere are 3 tips you should keep in mind when working with Redshift as your team\u0026rsquo;s data-warehouse:\nPrimary key and Foreign Key Constraints are not enforced Like other clustered database systems, AWS Redshift only uses primary and foreign keys as planning hints for certain statistical computations. Your ETL logic or source tables should be enforcing these constraints. 
Redshift is essentially a distributed column store, and the main reason why it\u0026rsquo;s able to batch writes and run massive selects and joins so quickly is because it doesn\u0026rsquo;t need to handle uniqueness constraints from primary keys and indexes.\nISO-8601 and Timestamp Precision An easy mistake to make when storing timestamps is to assume that storing them as ISO-8601 strings will automatically make them convertible between different databases, languages, etc\u0026hellip;\nFor example, the Go standard library supports nanosecond precision:\npackage main\nimport ( \u0026quot;fmt\u0026quot;; \u0026quot;time\u0026quot; )\nfunc main() { t := \u0026quot;2020-07-01T01:23:45.999999999Z\u0026quot;; res, err := time.Parse(time.RFC3339Nano, t); if err == nil { fmt.Println(res) } }\n2020-07-01 01:23:45.999999999 +0000 UTC\nIf you stored this in Redshift and tried to cast it back into a DATE with something like created_at::date or TIMESTAMP, you\u0026rsquo;d get the following error:\norg.postgresql.util.PSQLException: ERROR: date/time field value out of range\nLooking at AWS\u0026rsquo;s documentation [https://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html] we see that Redshift only supports 1 microsecond of precision!\nTo prevent issues like this, teams should ensure that their ETL code understands the source table schemas and does the appropriate transformations. Specifically for DynamoDB, even though DynamoDB itself only supports microsecond resolution, you should defensively check the string\u0026rsquo;s precision with an rtrim or use to_date(created_at, \u0026#39;YYYY-MM-DDTHH:MI:SS\u0026#39;).\nTeams should also assess what type of precision they need for their data. In most cases, it\u0026rsquo;s extremely rare that your system will actually need to measure time at nanosecond precision. 
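The defensive truncation suggested above can also be done at the string level before loading; a sketch of a hypothetical Python pre-load ETL helper (not a library function):

```python
import re

def truncate_to_micros(ts: str) -> str:
    """Trim an ISO-8601 timestamp's fractional seconds to the 6 digits
    (microseconds) that Redshift's TIMESTAMP type can actually store."""
    # Match a fractional-seconds part with more than 6 digits and keep only 6
    return re.sub(r"\.(\d{6})\d+", lambda m: "." + m.group(1), ts)

print(truncate_to_micros("2020-07-01T01:23:45.999999999Z"))
# 2020-07-01T01:23:45.999999Z
```

Timestamps already at microsecond precision or coarser pass through unchanged, and truncating (rather than rounding) mirrors what an rtrim-style fix does while avoiding the out-of-range error on load.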
At those levels, you\u0026rsquo;re at the mercy of system clock errors [http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm], clock drift, etc\u0026hellip;\nSQL Three-Valued Logic This one isn\u0026rsquo;t specific to Redshift, but is more of a tricky SQL gotcha. Let\u0026rsquo;s say you have this sample query:\nSELECT * from some_table WHERE name NOT IN (\u0026#39;a\u0026#39;, \u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;)\nwhere name is nullable. Would you expect this to return rows where name is NULL? The query will not return any rows where name is NULL due to three-valued logic. NULL is a special case where we designate data as absent or missing. From the context of a boolean expression, if we have the expression:\nNULL != \u0026#39;a\u0026#39;\nThree-valued logic defines the result of this as unknown since we don\u0026rsquo;t actually know the value of the data. WHERE clauses only return rows whose condition evaluates to true, so the unknown rows are filtered out.\nThe correct query would then be:\nSELECT * from some_table WHERE name IS NULL OR name NOT IN (\u0026#39;a\u0026#39;, \u0026#39;b\u0026#39;, \u0026#39;c\u0026#39;)\n","date":"3 July 2020","externalUrl":null,"permalink":"/posts/aws-redshift-gotchas/","section":"Posts","summary":"","title":"Redshift SQL Gotchas","type":"posts"},{"content":"Painting and Generative Stuff\n","date":"16 December 2019","externalUrl":null,"permalink":"/posts/art/","section":"Posts","summary":"","title":"Art","type":"posts"},{"content":"A lot of the cryptographic software we use relies on hashing and its uniqueness guarantees to provide core functionality, such as authentication, message integrity, message fingerprinting, data corruption detection, etc\u0026hellip; Guaranteeing this uniqueness, however, is fundamentally impossible (following from the Pigeonhole Principle) if your input space is larger than your key space. 
At some point you\u0026rsquo;re guaranteed to have two different pieces of data that map to the same key. So just how frequently will these collisions occur?\nGiven n randomly generated values, where each is a k-bit integer, what is the probability that any two of them are equal? To calculate this probability, it might be simpler to first look at the probability of no collisions happening, since:\n$$P(Any~Collision) = 1 - P(No~Collisions)$$If we have a k-bit integer, there are N = 2^k possible values. The probability of one number not colliding with another would be:\n$$\\frac{N-1}{N}$$since N-1 of the remaining values are distinct from it. If we generate another value, the probability then becomes:\n$$\\frac{N-1}{N} \\times \\frac{N-2}{N}$$by the multiplicative rule of probability. Extending this to n different values, we get:\n$$\\frac{N-1}{N} \\times \\frac{N-2}{N} \\times ... \\times \\frac{N-(n-1)}{N}$$Factoring out the N to simplify:\n$$\\left(1-\\frac{1}{N}\\right) \\times \\left(1-\\frac{2}{N}\\right) \\times ... \\times \\left(1-\\frac{n-1}{N}\\right)$$If N \u0026gt;\u0026gt; n^2, we can apply the approximation that 1-x is roughly e^{-x} (1-x \u0026lt; e^{-x}, with near-equality for small x):\n$$e^{-1/N} \\times e^{-2/N} \\times ...
\\times e^{-(n-1)/N} $$Simplifying the product we get:\n$$e^{-\\frac{1}{N}\\sum_{j=1}^{n-1}j}$$And reducing the sum we get:\n$$e^{\\frac{-n(n-1)}{2N}}$$So now that we have a usable approximation for the probability of a hash collision not happening, the probability that a collision does happen becomes:\n$$P(Any~Collision) = 1 - e^{\\frac{-n(n-1)}{2N}}$$With this we can get a general sense of just how often we can expect a collision for a given value size.\nBits | Output Size | # of hashes for a 50% chance of collision\n16 | ~6.5 × 10^4 | 300\n32 | ~4.3 × 10^9 | 77,000\n64 | ~1.8 × 10^19 | 5.1 × 10^9\n128 | ~3.4 × 10^38 | 2.2 × 10^19\n256 | ~1.2 × 10^77 | 4.0 × 10^38\n512 | ~1.3 × 10^154 | 1.4 × 10^77 ","date":"18 August 2018","externalUrl":null,"permalink":"/posts/hash-collisions/","section":"Posts","summary":"","title":"Birthday Attack","type":"posts"},{"content":"","date":"18 August 2018","externalUrl":null,"permalink":"/categories/cryptography/","section":"Categories","summary":"","title":"Cryptography","type":"categories"},{"content":"","date":"18 August 2018","externalUrl":null,"permalink":"/tags/cryptography/","section":"Tags","summary":"","title":"Cryptography","type":"tags"},{"content":"Zen is arguably the best sushi restaurant in the Greater Toronto Area. They offer an Omakase menu which consists of 13 pieces of nigiri and a handroll, which will set you back around $120.\nMy personal highlights were:\nSea Perch with Shiso leaf - the flavour and textural contrasts were on point A5 Wagyu Beef - need I say more? Mind-blowing handroll! ","date":"17 March 2018","externalUrl":null,"permalink":"/posts/zen-japanese-restaurant/","section":"Posts","summary":"","title":"Zen Japanese Restaurant","type":"posts"},{"content":"Chef Shiro Kashiba started Seattle\u0026rsquo;s first sushi bar in 1970 after years of grueling training alongside the world-renowned sushi chef Jiro Ono in Tokyo.
In 2015, Shiro opened his new place Sushi Kashiba in the heart of Pike Place.\nI was lucky enough to get bar seating on a work trip to Seattle back in 2017. The pieces were impeccable with some more modern takes like seared otoro and salmon. My most memorable piece was definitely the anago, which had the most amazing glaze, texture and flavour I\u0026rsquo;ve ever tasted with eel.\n","date":"15 December 2017","externalUrl":null,"permalink":"/posts/sushi-kashiba/","section":"Posts","summary":"","title":"Sushi Kashiba","type":"posts"},{"content":"As a die-hard Bourdain fan, I had to make a trip to Sushi Bar Yasuda. Making reservations in Japan is usually pretty difficult as a foreigner since most places will only accept reservations through a concierge. Luckily Yasuda was one of the rare places in Tokyo that had an online reservation system.\nWasabi ain\u0026rsquo;t no horseradish If you\u0026rsquo;re looking for traditional Edo-style sushi, you\u0026rsquo;ll need to check your expectations at the door. Chef Yasuda blazes a new path with non-traditional ingredients like oysters, geoduck and even scallion sprouts! He demonstrated a real mastery of his craft by specifically ordering each piece in the Omakase menu to highlight the subtle nuances in each type of fish. I don\u0026rsquo;t think I ever really appreciated the subtleties of the different species of uni until I tried 3 different variations back to back!\nThe biggest surprise of the meal, I think, was actually the quality of the sushi rice. During the course of the meal, Yasuda explained that the true key to sushi, among many small details, was the contrast between the rice and the fish. There must be a contrast in temperature between the fish and the rice, ideally the fish being mildly cold and the rice at almost body temperature. Yasuda\u0026rsquo;s sushi rice was also packed a lot looser than what I\u0026rsquo;ve normally seen, and what a difference that makes!
The looseness preserves the integrity of the rice granules, leaving a much better mouthfeel and flavour that I\u0026rsquo;ve yet to see replicated anywhere else.\nFor me, my favourite pieces were:\nMackerel Oyster - super non-traditional with a unique flavour and mouthfeel 3 variations of Uni ranging from creamy, sweet and briny Unagi - grilled on top of charcoal as he preps the other pieces Amberjack Skipjack Pike Mackerel New England Fatty Tuna Mackerel Chinook Salmon Oyster! Mackerel #2 Ebi Uni #1 - Short Spike from Russia Uni #2 Scallop Sardines Uni #3 Otoro Red Clam Geoduck Squid Shrimp King Salmon Roe Snow Crab Herring Miso Soup Scallion Sprouts Conger Eel Unagi #2 Otoro Roll ","date":"19 August 2017","externalUrl":null,"permalink":"/posts/sushi-bar-yasuda/","section":"Posts","summary":"","title":"Sushi Bar Yasuda","type":"posts"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/categories/fivethirtyeight/","section":"Categories","summary":"","title":"Fivethirtyeight","type":"categories"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/tags/fivethirtyeight/","section":"Tags","summary":"","title":"Fivethirtyeight","type":"tags"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/categories/riddle/","section":"Categories","summary":"","title":"Riddle","type":"categories"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/tags/riddle/","section":"Tags","summary":"","title":"Riddle","type":"tags"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/categories/space/","section":"Categories","summary":"","title":"Space","type":"categories"},{"content":"","date":"18 April 2016","externalUrl":null,"permalink":"/tags/space/","section":"Tags","summary":"","title":"Space","type":"tags"},{"content":"Another FiveThirtyEight Riddler. The President has bestowed upon you 1 billion dollars with the mission of getting us to an alien artifact as fast as possible!
Here\u0026rsquo;s what\u0026rsquo;s available to you:\nBig Russian engines costing 400 million each. Buying one will reduce the trip time by 200 days. Buying two will save another 100 days.\nNASA ion engines. There are only eight of these 150 million large-scale engines in the world. For each 150 million fully fueled xenon engine you buy, you can take 50 days off of the trip.\nLight payloads sent ahead of time. For 50 million each, you lighten the main mission and reduce the arrival time by 25 days.\nWhat\u0026rsquo;s the best strategy?\nThis boils down to an integer programming problem: we have several integer-valued variables, some constraints (in our case, the 1 billion dollar budget), and an objective function we want to maximize (in our case, the number of days saved). Letting x_0 be the option of buying one Russian engine, x_1 the option of buying two (800 million for 300 days saved in total), x_2 the number of ion engines, and x_3 the number of light payloads, we get two equations:\nBudget Constraint: $$400x\\_0+800x\\_1+150x\\_2+50x\\_3 \\leq 1000$$Days saved to maximize: $$ max(200x\\_0 + 300x\\_1 + 50x\\_2 + 25x\\_3) $$And scipy has a solver! (linprog solves the continuous relaxation, but the optimum here happens to land on integer values.)\nfrom scipy.optimize import linprog\n\nc = [-200, -300, -50, -25]\nA = [[400, 800, 150, 50]] # A_ub is 2-D: one row per constraint\nb = [1000]\nx0_bounds = (0, 1)\nx1_bounds = (0, 1)\nx2_bounds = (0, 8)\nx3_bounds = (0, None)\nres = linprog(\n c,\n A_ub=A,\n b_ub=b,\n bounds=(x0_bounds, x1_bounds, x2_bounds, x3_bounds),\n options={\u0026#34;disp\u0026#34;: True}\n)\nprint res\nOptimization terminated successfully.
Current function value: -500.000000\nIterations: 4\nfun: -500.0\nmessage: \u0026#39;Optimization terminated successfully.\u0026#39;\nnit: 4\nslack: array([ 0., 0., 1., 8.])\nstatus: 0\nsuccess: True\nx: array([ 1., 0., 0., 12.])\nSo the solution is to buy one Russian rocket and send out 12 light payloads ahead of time, and we save 500 days!\n","date":"18 April 2016","externalUrl":null,"permalink":"/posts/space-race/","section":"Posts","summary":"","title":"Space Race","type":"posts"},{"content":"Another weekly FiveThirtyEight riddler, and this one\u0026rsquo;s a mind-bender.\nThree very skilled logicians are sitting around a table — Barack, Pete and Susan. Barack says: “I’m thinking of two natural numbers between 1 and 9, inclusive. I’ve written the product of these two numbers on this paper that I’m giving to you, Pete. I’ve written the sum of the two numbers on this paper that I’m giving to you, Susan. Now Pete, looking at your paper, do you know which numbers I’m thinking of?”\nPete looks at his paper and says: “No, I don’t.”\nBarack turns to Susan and asks: “Susan, do you know which numbers I’m thinking of?” Susan looks at her paper and says: “No.”\nBarack turns back to Pete and asks: “How about now? Do you know?”\n“No, I still don’t,” Pete says.\nBarack keeps going back and forth, and when he asks Pete for the fifth time, Pete says: “Yes, now I know!”\nFirst, what are the two numbers? Second, if Pete had said no the fifth time, would Susan have said yes or no at her fifth turn?\nIt appears that they both shouldn\u0026rsquo;t have enough information to deduce what the two numbers are. How is it possible that after only five turns they could exchange enough information to pinpoint the number pair?\nWhen Pete first says \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo;, that means the product he\u0026rsquo;s been given isn\u0026rsquo;t unique in the space of all possible product pairs from [1,9].
So Pete can eliminate all pairs that have unique products.\nSusan, following the same logic, also doesn\u0026rsquo;t see a unique sum, and can eliminate all pairs that have unique sums.\nIterate this for 5 turns and we get the following logic:\nfrom collections import Counter\n\nlower_bound = 1\nupper_bound = 9\nremaining_pairs = set((a, b) for a in range(lower_bound, upper_bound + 1) for b in range(a, upper_bound + 1))\n\nfor i in range(0, 4):\n    print \u0026#34;Pete - \u0026#39;I don\u0026#39;t know\u0026#39;\u0026#34;\n    _prod_counts = Counter(a*b for a, b in remaining_pairs)\n    remaining_pairs = set((a, b) for a, b in remaining_pairs if _prod_counts[a*b] \u0026gt; 1)\n    print \u0026#34;Susan - \u0026#39;I don\u0026#39;t know\u0026#39;\u0026#34;\n    _sum_counts = Counter(a+b for a, b in remaining_pairs)\n    remaining_pairs = set((a, b) for a, b in remaining_pairs if _sum_counts[a+b] \u0026gt; 1)\n\n_prod_counts = Counter(a*b for a, b in remaining_pairs)\nprint \u0026#34;Pete - \u0026#39;I know its: {}\u0026#39;\u0026#34;.format(set((a, b) for a, b in remaining_pairs if _prod_counts[a*b] == 1))\nRunning this we get:\nPete - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Susan - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Pete - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Susan - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Pete - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Susan - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Pete - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Susan - \u0026lsquo;I don\u0026rsquo;t know\u0026rsquo; Pete - \u0026lsquo;I know its: set([(2, 8)])\u0026rsquo;\nAnd in the case where Pete doesn\u0026rsquo;t know in round 5, Susan also won\u0026rsquo;t know what the pair is.\n","date":"6 April 2016","externalUrl":null,"permalink":"/posts/the-impossible-problem/","section":"Posts","summary":"","title":"Impossible Puzzle","type":"posts"},{"content":"The solution for this week\u0026rsquo;s FiveThirtyEight riddler
[https://fivethirtyeight.com/features/should-you-pay-250-to-play-this-casino-game/] is pretty nutty.\nThe riddle is essentially:\nIf you drew random numbers from a uniform distribution on [0,1], what is the expected number of draws you would perform until their sum exceeds 1?\nSo let us define $$S_{n}$$ as the sum of the first n numbers drawn. Then the probability of the first sum $$S_{1}$$ being less than some value x is:\n$$P(S_{1}\\leq x) = x$$The probability of the second sum $$S_{2}$$ being less than x can be calculated by conditioning on the value u of the second draw and integrating it out (a convolution with the uniform density):\n$$P(S_{2}\\leq x) =\\int_0^x{P(S_{1}\\leq x-u)}\\cdot 1\\ du$$ $$ =\\int_0^x (x-u)\\ du$$ $$ =\\frac{x^2}{2}$$Probability of the 3rd is:\n$$P(S_{3}\\leq x) =\\int_0^x{P(S_{2}\\leq x-u)}\\cdot 1\\ du$$ $$ =\\int_0^x \\frac{(x-u)^2}{2}\\ du$$ $$ =\\frac{x^3}{6}$$Extending to n:\n$$P(S_{n}\\leq x) =\\frac{x^n}{n!}$$Since we\u0026rsquo;re concerned with the sum exceeding $x=1$:\n$$P(S_{n}\\leq 1) =\\frac{1}{n!}$$To find the probability that exactly $n$ draws are needed, we take the probability that the first $n-1$ draws have not yet exceeded one and subtract the probability that the first $n$ draws have not either.\n$$P(N = n) = \\frac{1}{(n-1)!} - \\frac{1}{n!}$$Then we can calculate the expectation of that distribution:\n$$E[N] = \\sum_{n=2}^\\infty n \\cdot (\\frac{1}{(n-1)!} - \\frac{1}{n!})$$ $$ = \\sum_{n=2}^\\infty (\\frac{n}{(n-1)!} - \\frac{1}{(n-1)!})$$ $$ = \\sum_{n=2}^\\infty \\frac{1}{(n-2)!}$$ $$ = \\sum_{n=0}^\\infty \\frac{1}{n!}$$ $$ = e$$The expected number of draws you need to exceed 1 is e!\nThe convergence to e does make some intuitive sense. You\u0026rsquo;ll almost assuredly never bust on the first draw, so the minimum number of draws you\u0026rsquo;ll have to make is at least two. Since we\u0026rsquo;re drawing from a uniform distribution on [0,1], we have a mean of 1/2.
2x the mean gets us to 1, and 3x the mean moves us to 1.5, so a value in between 2-3 fits the bill.\n","date":"26 March 2016","externalUrl":null,"permalink":"/posts/casino-game/","section":"Posts","summary":"","title":"Casino Game","type":"posts"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/categories/basketball/","section":"Categories","summary":"","title":"Basketball","type":"categories"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/tags/basketball/","section":"Tags","summary":"","title":"Basketball","type":"tags"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/categories/nba/","section":"Categories","summary":"","title":"Nba","type":"categories"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/tags/nba/","section":"Tags","summary":"","title":"Nba","type":"tags"},{"content":"Since the 2013-14 season, the NBA has installed special cameras in every NBA stadium allowing teams to track every single player and ball movement, and digitize it into data that their data scientists can crunch into ever more sophisticated metrics on team and player performance.\nWhat\u0026rsquo;s even more amazing is that a lot of this data is now publicly accessible via stats.nba.com.\nGrabbing the Data So how do we get to the data? Screen scraping? Nope! If you pull up your browser\u0026rsquo;s debug console and look at the network traffic, you\u0026rsquo;ll find that stats.nba.com has very kindly exposed some endpoints for their Angular app.
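Once you have one of these JSON responses in hand (an example follows below), the header/rowSet layout they use is easy to reshape into dicts. A sketch, using a made-up helper name and a trimmed-down version of the real payload:

```python
def parse_result_set(payload, name):
    # Convert one of stats.nba.com's header/rowSet result sets
    # into a list of dicts keyed by column name.
    for rs in payload["resultSets"]:
        if rs["name"] == name:
            return [dict(zip(rs["headers"], row)) for row in rs["rowSet"]]
    return []

# Trimmed-down commonplayerinfo response for illustration
payload = {
    "resultSets": [
        {"name": "CommonPlayerInfo",
         "headers": ["PERSON_ID", "FIRST_NAME", "LAST_NAME"],
         "rowSet": [[977, "Kobe", "Bryant"]]},
    ]
}
players = parse_result_set(payload, "CommonPlayerInfo")
print(players[0]["LAST_NAME"])
# Bryant
```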
For example, all common info for Kobe Bryant for the 2013-14 season:\nhttp://stats.nba.com/stats/commonplayerinfo?LeagueID=00\u0026amp;PlayerID=977\u0026amp;SeasonType=Regular+Season\u0026amp;Season=2013-14 Response:\n{\u0026#34;resource\u0026#34;:\u0026#34;commonplayerinfo\u0026#34;,\u0026#34;parameters\u0026#34;:[{\u0026#34;PlayerID\u0026#34;:977},{\u0026#34;LeagueID\u0026#34;:\u0026#34;00\u0026#34;}],\u0026#34;resultSets\u0026#34;:[{\u0026#34;name\u0026#34;:\u0026#34;CommonPlayerInfo\u0026#34;,\u0026#34;headers\u0026#34;:[\u0026#34;PERSON_ID\u0026#34;,\u0026#34;FIRST_NAME\u0026#34;,\u0026#34;LAST_NAME\u0026#34;,\u0026#34;DISPLAY_FIRST_LAST\u0026#34;,\u0026#34;DISPLAY_LAST_COMMA_FIRST\u0026#34;,\u0026#34;DISPLAY_FI_LAST\u0026#34;,\u0026#34;BIRTHDATE\u0026#34;,\u0026#34;SCHOOL\u0026#34;,\u0026#34;COUNTRY\u0026#34;,\u0026#34;LAST_AFFILIATION\u0026#34;,\u0026#34;HEIGHT\u0026#34;,\u0026#34;WEIGHT\u0026#34;,\u0026#34;SEASON_EXP\u0026#34;,\u0026#34;JERSEY\u0026#34;,\u0026#34;POSITION\u0026#34;,\u0026#34;ROSTERSTATUS\u0026#34;,\u0026#34;TEAM_ID\u0026#34;,\u0026#34;TEAM_NAME\u0026#34;,\u0026#34;TEAM_ABBREVIATION\u0026#34;,\u0026#34;TEAM_CODE\u0026#34;,\u0026#34;TEAM_CITY\u0026#34;,\u0026#34;PLAYERCODE\u0026#34;,\u0026#34;FROM_YEAR\u0026#34;,\u0026#34;TO_YEAR\u0026#34;,\u0026#34;DLEAGUE_FLAG\u0026#34;],\u0026#34;rowSet\u0026#34;:[[977,\u0026#34;Kobe\u0026#34;,\u0026#34;Bryant\u0026#34;,\u0026#34;Kobe Bryant\u0026#34;,\u0026#34;Bryant, Kobe\u0026#34;,\u0026#34;K. 
Bryant\u0026#34;,\u0026#34;1978-08-23T00:00:00\u0026#34;,\u0026#34;Lower Merion HS (PA)\u0026#34;,\u0026#34;USA\u0026#34;,\u0026#34;Lower Merion HS (PA)/USA\u0026#34;,\u0026#34;6-6\u0026#34;,\u0026#34;212\u0026#34;,18,\u0026#34;24\u0026#34;,\u0026#34;Guard\u0026#34;,\u0026#34;Active\u0026#34;,1610612747,\u0026#34;Lakers\u0026#34;,\u0026#34;LAL\u0026#34;,\u0026#34;lakers\u0026#34;,\u0026#34;Los Angeles\u0026#34;,\u0026#34;kobe_bryant\u0026#34;,\u0026#34;1996\u0026#34;,\u0026#34;2014\u0026#34;,\u0026#34;N\u0026#34;]]},{\u0026#34;name\u0026#34;:\u0026#34;PlayerHeadlineStats\u0026#34;,\u0026#34;headers\u0026#34;:[\u0026#34;PLAYER_ID\u0026#34;,\u0026#34;PLAYER_NAME\u0026#34;,\u0026#34;TimeFrame\u0026#34;,\u0026#34;PTS\u0026#34;,\u0026#34;AST\u0026#34;,\u0026#34;REB\u0026#34;,\u0026#34;PIE\u0026#34;],\u0026#34;rowSet\u0026#34;:[[977,\u0026#34;Kobe Bryant\u0026#34;,\u0026#34;2014-15\u0026#34;,26.4,4.1,5.1,0.119]]}]} They even expose movement data for each \u0026ldquo;play\u0026rdquo; in a game.\nhttp://stats.nba.com/stats/locations_getmoments/?eventid={}\u0026amp;gameid={} Response:\n\u0026#34;{\u0026#34;moments\u0026#34;: [[1, 1415407240007, 719.12, 24.0, null, [[-1, -1, 48.30107, 33.09853, 10.3485], [1610612761, 200768, 76.01662, 25.29939, 0.0], [1610612761, 101161, 57.58085, 29.34887, 0.0], [1610612761, 201942, 50.654, 33.00764, 0.0], [1610612761, 203082, 51.8 (...)\u0026#34; Plotting Things With this data we can do some cool things, like plot a density plot of the locations players tend to occupy. For example, let\u0026rsquo;s look at this Wizards vs. Raptors game from Nov 7, 2014.\nDemar Derozan Kyle Lowry\nAs a point guard, Lowry\u0026rsquo;s movement probably has a lot of variability since he has to initiate the play.
Amir Johnson\nA lot of movement down in the post and mid-range areas, as a power-forward should have.\nTerence Ross\nRoss loves his corner 3\u0026rsquo;s Lou Williams Jonas Valanciunas\nCenters have a pretty simple movement pattern: move from baseline to baseline ","date":"29 September 2015","externalUrl":null,"permalink":"/posts/nba-player-tracking/","section":"Posts","summary":"","title":"NBA Player Tracking","type":"posts"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/categories/sportsvu/","section":"Categories","summary":"","title":"Sportsvu","type":"categories"},{"content":"","date":"29 September 2015","externalUrl":null,"permalink":"/tags/sportsvu/","section":"Tags","summary":"","title":"Sportsvu","type":"tags"},{"content":"","date":"19 April 2015","externalUrl":null,"permalink":"/categories/quine/","section":"Categories","summary":"","title":"Quine","type":"categories"},{"content":"","date":"19 April 2015","externalUrl":null,"permalink":"/tags/quine/","section":"Tags","summary":"","title":"Quine","type":"tags"},{"content":"","date":"19 April 2015","externalUrl":null,"permalink":"/categories/tupper/","section":"Categories","summary":"","title":"Tupper","type":"categories"},{"content":"","date":"19 April 2015","externalUrl":null,"permalink":"/tags/tupper/","section":"Tags","summary":"","title":"Tupper","type":"tags"},{"content":"So here\u0026rsquo;s something cool.
This formula:\n$$\\frac{1}{2} \u003c \\lfloor mod(\\lfloor\\frac{y}{17}\\rfloor2^{-17\\lfloor x \\rfloor -mod(\\lfloor y\\rfloor,17)},2)\\rfloor$$will literally plot itself if you look at the right region (x from 0 to 105, and y from k to k + 16),\nwhere k is this 543-digit integer:\n960 939 379 918 958 884 971 672 962 127 852 754 715 004 339 660 129 306 651 505 519 271 702 802 395 266 424 689 642 842 174 350 718 121 267 153 782 770 623 355 993 237 280 874 144 307 891 325 963 941 337 723 487 857 735 749 823 926 629 715 517 173 716 995 165 232 890 538 221 612 403 238 855 866 184 013 235 585 136 048 828 693 337 902 491 454 229 288 667 081 096 184 496 091 705 183 454 067 827 731 551 705 405 381 627 380 967 602 565 625 016 981 482 083 418 783 163 849 115 590 225 610 003 652 351 370 343 874 461 848 378 737 238 198 224 849 863 465 033 159 410 054 974 700 593 138 339 226 497 249 461 751 545 728 366 702 369 745 461 014 655 997 933 798 537 483 143 786 841 806 593 422 227 898 388 722 980 000 748 404 719\nSo how exactly does this work? Let\u0026rsquo;s try to decipher what\u0026rsquo;s happening in the inequality.\nSaying the floor of something is greater than a half just means it\u0026rsquo;s at least one:\n$$1 \\leq mod(\\lfloor\\frac{y}{17}\\rfloor2^{-17\\lfloor x \\rfloor -mod(\\lfloor y\\rfloor,17)},2)$$If we let $y = 17q + r$ where $0 \\leq r \u0026lt; 17$, the expression simplifies to:\n$$1 \\leq mod(\\frac{q}{2^{17x+r}},2)$$Asking whether a number mod 2, floored, equals 1 amounts to asking whether the floor of that number is odd, so the inequality we had before amounts to asking whether $$\\lfloor \\frac{q}{2^{17x+r}} \\rfloor$$ is odd.\nNow if you notice, our expression is dividing $q$ by a power of two, which, if you\u0026rsquo;re familiar with binary arithmetic, equates to asking if the $(17x+r)$th bit of $q$ is a 1.\nSo $q$ is essentially just the bits of the image we want to display, and the inequality is just a way of mapping the bits of $q$ to positions $(x,r)$ on the graph.\nWith an understanding of how the formula
works, we can derive a more general formula for an arbitrary resolution, and find an N for any image of our choosing.\n$$\\frac{1}{2} \u003c \\lfloor mod(\\lfloor\\frac{y}{H}\\rfloor2^{-H\\lfloor x \\rfloor -mod(\\lfloor y\\rfloor,H)},2)\\rfloor$$ convert a .png image file to a .bmp image file using PIL # from PIL import Image import numpy as np\ncol = Image.open(\u0026quot;/Users/benjaminyu/Pictures/snoo.png\u0026quot;) gray = col.convert(\u0026lsquo;L\u0026rsquo;) gray = gray.resize((128,int((float(gray.size[1])*float(128/float(gray.size[0]))))), Image.ANTIALIAS)\nLet numpy do the heavy lifting for converting pixels to pure black or white # bw = np.asarray(gray).copy()\nPixel range is 0\u0026hellip;255, 256/2 = 128 # bw[bw \u0026lt; 128] = 1 # White bw[bw \u0026gt;= 128] = 0 # Black N = int(\u0026quot;\u0026quot;.join(map(lambda x: str(x), np.ravel(np.flipud(bw.T)))),2) * bw.shape[0]\nprint N\nSo we can encode any arbitrary picture into a number N:\nN = 2934623275416912475030491109196223507870408304003274787529399398373409470947725212282507513985364121401993299590057603536651058434430428000713934034262420539588839714855313659476041329252205216284053036373985592156078116097862529167048970032268372106689275943029232884582183802275608406984310060487284077383545964976626049402237758337424825667882109775758119069759711086159475402546442908115604247479467761707595602823277222590174148061721144661562877935837684987331648950202379210419761418529491052519903672658483467127112354245170241000511651371573682684652138489833128947164860555689296831469585401404266519623158869788893484136525119276348088537397595057764763706428854503607822157361565041165736407127934729927206490929693840877341534651602519824327919988642906846941938143188348416246303450729216256492285763699631779017092550576801609733463569141992832078904210560389835678854985222553314212302157089647040770218576400856148712692549612864827859634848567874230553830231008608480551526185331610884023950473395627404510338247
99050960468830550285725820616943721981754414759758371219521377765340526875672757188367868958516332787172073943624046066941830030662446533708334534046499538127475753130224879053971676639771185232730299077038352360690873924802636459935575683435139755848705905068492826092883843222251879342729243395101348397801747010812286520818054882198864698623170354029115955890378923589397971342083275541442373383628155185372904292635214259252609560187614158021410554379139029360290348165225601783680346839953926535618955755877142207748875451491857885521071686782331230618734456689035023480243375046872868943276855531187539995066691924400268426857475340781363476160199447415623745750041844203348293630819203821916308702431408054979757998810392490863068712275161915412739557355790867427668681437796565126318175199720888746272134314272419546158913349030151724218651100867259388150780352344156695218796023751538762573455315673614972719948169435477327012852217986673150965792546009013952887445827524442943089245294298175607649688544156751487902736169281784232146458445928017332414190850449115694359427677110372283407680820175268672608556007188587298422308768269692057557045644265547722503997672749519991123822482907239478354305087315028065059181943153327236750587715802072758432698088347718092887460539135081702475987130940408538798940532971595715032497907153558996669621346442308711898148099910509753529394266180651150359347351372411687758544075118551625375573779516278601781293157416159260676753571407421559882168659329353293296026443763555876597638957687750930534022175897306508919641583749844549740852515648988621281368424372432372500468582077001925542243990993072726756127316906787867479427301005612523411353098733037684626107902827038044477183209946225035943040827639822072007427455963798480476043607829879634303508795936264300544099764552357981046497393712668269454008357485814176750813477322794206867414803167013171793141868915236358650090053568005112741553399125140391954866108183489905209933709819439219738493
51846630265462623440689868017563668085271949696490497551367123609547743149646254059346789727128395904705782118175464249386348492898478547689740754816296066010737004469514758870763454472959183940915126821948303125208189966876893855057165646722509933084518967986450877943014547361795319337525759801800530952620261124301095530881686741585712125739036199013146365164383768027211765169850884500786342725482087278383948361952690096031610450078452226259816837073889142976133697789545564649696661147298900398167880086119877511492840827465501127662083120522095598176119695883235564046332470699617298017743156957244400623967452506754019360977737952133053367226432337405679851765601743004458328656402383826753732742710376060398581066828946964502792210973365765938870339829770334450507013834928787783555484636575472309339170034121176742211640369613783174849035116664299859926007221352421536780974754454475591032236414942204959699195976153522587101851283902286105494749811960814918162442191845917289577456448155454228962760541891624689512901853578533616104302590871619864860534192302937662305608124279828067437501375619415346726660114385612636432053879155975055010123178966589024354727129207870936161468128467542475616706730982566684110557234837236920845672642750935280438264216423415266437982013641099532659022477973322338815123768769057754918325415806804621106319624116049607765729872468417505757864007489309678681586550638701791564338502870702231908247877693757341728511999642694510357409792895660798824012732232613100670619154766328545841967078989105274791636636452951742863263635195258155721975297282873756008078526196550345976053538679242971401236228050703819442056505137164542901348058324434444511571843114199101568684092895546838079831102492427413759742403593698823599060257746297417388317390822425742438358912885834038632495910344783330384740169800474118316322342015579063267050106708431575728607899620760040948630295626862713243136360414757313356244391517846559472594523187400128321094943581738666058479
31029270764452184619298435460341718297701175482814567128098509020623654863739197228539946449925577892594667783194019174140177224053906492027814008479465063700700820148751768139656986476815902597185582146284727070605978078894511823945488348445276827366156670303755507474115433582411261056679262368762751023162449086431759226266531174049758164577352632233910284433758042743981099850816544790191658943184252792707349325162613913695560734979746878302087927316458577289480139955269882212245203299184694125165342733137841349671020199734260111420186534472078196943004821011381558040042661294167455749340816356028306882591078444373681567241740290605566131305020533059557126560715499839884768296019624488670465052971862717186589417628954021185483642208527902959990388260257883919179452013110812992396384969965573333053950576661974326883712785363315544323913073109445836071572347480037116995200791994392602132754118135407974028778561916293468963500728098108327821644068498654395825016124076034162558054052345352347491282610738480134818886509558024740458088584546601999237568158790446312078354699975979358000396979699037944957751572234892616739917843244132366528868317702223416594649750526197990843458577361927168473772498652415698180679926392669938648987073048836151249093833335193839379728878750036316072143758592955863478336095411405652084454602034006507413616883789500875620245129785602008478364101446537874787569975931670692042282785420068746656178574460092482966788307063453556499807124890274331421456245864188978294849283300156356709334733056597424787335569463024929057934816845299878638415355316969026583608608203385988139253070906177042047230251597354265667817279953753889216339598949104763900051731096838710295656982208543112900311584701675361794925082663270423598807715933774020079490898414570864642092474729075048448L\nAnd now plotting it:\nH = bw.shape[0] W = bw.shape[1]\ndef tupper(x,y): return 0.5 \u0026lt; ((y//H) // (2**(H*x + y%H))) % 2\nplt.rc(\u0026lsquo;patch\u0026rsquo;, 
antialiased=False)\nfor x in xrange(W):\n    for yy in xrange(H):\n        y = N + yy\n        if tupper(x, y):\n            plt.bar(left=x, bottom=yy, height=1, width=1, linewidth=0, color=\u0026#39;black\u0026#39;)\nplt.axis(\u0026#39;scaled\u0026#39;)\nReferences Wikipedia - Tupper\u0026rsquo;s self-referential formula [https://en.wikipedia.org/wiki/Tupper%27s_self-referential_formula]\nHow does Tupper\u0026rsquo;s self-referential formula work [https://shreevatsa.wordpress.com/2011/04/12/how-does-tuppers-self-referential-formula-work/]\n","date":"19 April 2015","externalUrl":null,"permalink":"/posts/tuppers-self-referential-formula/","section":"Posts","summary":"","title":"Tupper's self-referential formula","type":"posts"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/categories/games/","section":"Categories","summary":"","title":"Games","type":"categories"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/tags/games/","section":"Tags","summary":"","title":"Games","type":"tags"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/categories/markov/","section":"Categories","summary":"","title":"Markov","type":"categories"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/tags/markov/","section":"Tags","summary":"","title":"Markov","type":"tags"},{"content":"I\u0026rsquo;m not sure why I find this so surprising, but it\u0026rsquo;s a somewhat trivial task to write a program that can spew out somewhat coherent sentences, as long as you have a large corpus to work off.\nMarkov Chains\nMarkov Chains are essentially state machines with probabilities assigned to each of their state transitions.\nFor example, using the Markov Chain shown above, if it were sunny today, there would be a 70% chance it would be rainy tomorrow, and a 30% chance it would still be sunny. If one were to continue to traverse the chain, they would get a sequence of weather patterns like:\n'sunny', 'rainy', 'rainy', 'sunny', ...
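Traversing such a chain is just repeated weighted sampling. A minimal sketch (the sunny transitions match the example above; the rainy row is an assumption for illustration, since the original diagram is not reproduced here):

```python
import random

# Transition table: sunny -> rainy 70%, sunny -> sunny 30% (from the example).
# The rainy row is an assumed placeholder.
chain = {
    "sunny": {"rainy": 0.7, "sunny": 0.3},
    "rainy": {"rainy": 0.5, "sunny": 0.5},
}

def traverse(chain, state, steps):
    # Walk the chain, picking each next state by its transition probability.
    seq = [state]
    for _ in range(steps):
        states, weights = zip(*chain[state].items())
        state = random.choices(states, weights=weights)[0]
        seq.append(state)
    return seq

print(traverse(chain, "sunny", 6))
```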
If the probabilities were reasonable, the sequence of weather patterns should reflect your 7-day weather forecast!\nThe Bot\nWe can apply this same methodology to generate some tweets! First we need some source material. I had scraped around 110,000 game reviews off Metacritic for another project I was working on.\nFirst we need to split up our corpus into individual tokens. We could split it up at several granularities, like per-character, per-word, or even per-sentence. As with most things, there\u0026rsquo;s a trade-off on both ends. Tokenize by character, and you\u0026rsquo;ll get nonsensical gibberish. If you split by sentence, you\u0026rsquo;re essentially just chaining together pieces from your source material. Let\u0026rsquo;s try the word level for now:\n\u0026#34;\u0026#34;\u0026#34; Using NLTK for convenience. You can use split() ¯\\_(ツ)_/¯ \u0026#34;\u0026#34;\u0026#34; tokenizer = nltk.tokenize.RegexpTokenizer(r\u0026#39;\\w+|[^\\w\\s]+\u0026#39;) tokenized_content = tokenizer.tokenize(review_file.read()) Now that we\u0026rsquo;ve tokenized our corpus, we need to make a decision on how we want to represent each state. We can use a single word/token as the state:\nto, be, or, not, to, be, … We can make it a bit more complex and do each pair of words:\nto be, be or, or not, not to, to be, … And pushing it even further, triples:\nto be or, be or not, or not to, not to be, … The technical term for this grouping of tokens is n-grams. Picking the correct n will impact the performance of your model. Pick an extremely large n, and your model will be very biased towards certain sequences. Pick too small an n and your model will spew out crap, since there\u0026rsquo;s so much variability.
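The single-word, pair, and triple states above are all instances of the same sliding-window operation. A quick sketch (plain Python, no NLTK needed):

```python
def ngrams(tokens, n):
    # Slide a window of width n across the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "to be or not to be".split()
print(ngrams(tokens, 1))  # unigrams: ('to',), ('be',), ...
print(ngrams(tokens, 2))  # bigrams:  ('to', 'be'), ('be', 'or'), ...
print(ngrams(tokens, 3))  # trigrams: ('to', 'be', 'or'), ...
```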
Tri-grams are usually a good bet for decent-sized corpora, but for smaller ones bi-grams perform better.\nNow that we\u0026rsquo;ve decided on what the states will look like in our Markov Chain, how do we go about representing it in a data structure? One simple way is to use a Hash Table/Dictionary where the keys are states in the Markov Chain, and each key\u0026rsquo;s value is the list of possible transitions, represented by an array of keys (which assumes that each transition has a uniform probability).\ndef train(self): for w1, w2, w3 in self.triplets(tokenized_content): key = (w1, w2) if key in self.model: self.model[key].append(w3) else: self.model[key] = [w3] Now that we have our Markov Chain ready to go, we\u0026rsquo;ll just need to start off at some random state, and traverse the chain to generate some tweets!\ndef generate_tweet(self): w1, w2 = random.choice(self.model.keys()) gen_words = [] tweet_length = 0 while tweet_length \u0026lt;= 100: gen_words.append(w1) tweet_length += len(w1) + 1 w1, w2 = w2, random.choice(self.model[(w1, w2)]) gen_words.append(\u0026#39;#GameReview\u0026#39;) return reduce(self.join_tokens, gen_words) The resulting tweets:\nTweets by @bot_review\nSource can be found here: https://gist.github.com/ben-yu/919e843ac4df8d0fccee\n","date":"7 April 2015","externalUrl":null,"permalink":"/posts/markov-models/","section":"Posts","summary":"","title":"Tweeting with Markov Chains","type":"posts"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/categories/twitter/","section":"Categories","summary":"","title":"Twitter","type":"categories"},{"content":"","date":"7 April 2015","externalUrl":null,"permalink":"/tags/twitter/","section":"Tags","summary":"","title":"Twitter","type":"tags"},{"content":"Here\u0026rsquo;s something mind-blowing:\nIf you take the sum of the first few natural numbers you get the triangle numbers:\n$$T_{1} = 1$$\n$$T_{2} = 1 + 2$$\n$$T_{3} = 1 + 2 + 3$$\n$$T_{n} = n(n+1)/2$$\nSo essentially the
partial sums of the series 1 + 2 + 3 + 4 + \u0026hellip; are the areas of n×n triangles.\nSo we let the series be:\n$$ c = 1 + 2 + 3 + 4 + ... $$\nBy multiplication:\n$$ 4c = 4 + 8 + 12 + 16 + ... $$If we then insert some zeros:\n$$ 4c = 0 + 4 + 0 + 8 + 0 + ... $$Taking the difference between the two, we get:\n$$ -3c = 1 - 2 + 3 - 4 + 5 - 6 + 7 ... $$So what does this new series sum up to? We know the geometric series:\n$$ \\sum\\limits_{n=0}^\\infty r^n = 1 + r + r^2 + r^3 + ... = 1/(1-r) $$Taking the derivative, with the Quotient Rule on the RHS and the Power Rule on the LHS: $$ \\sum\\limits_{n=0}^\\infty nr^{n-1} = 1/(1-r)^2 $$With r=-1:\n$$ 1/(1+1)^2 = \\sum\\limits_{n=0}^\\infty n(-1)^{n-1}$$ $$ 1/4 = 1 - 2 + 3 - 4 + 5 - 6 + 7 ...$$So from earlier we get:\n$$ -3c = 1/4 $$ $$ c = -1/12 $$So\n$$ 1 + 2 + 3 + 4 + 5 + ... = -1/12 $$ Of course, there was something wonky with the derivation we just did. Randomly inserting zeros into a divergent series to line up the subtraction will give inconsistent results depending on where you place them. There are ways to handle that by constraining where the zeros get placed, governed by another function.\nMath is weird.\n","date":"7 March 2015","externalUrl":null,"permalink":"/posts/1-2-3/","section":"Posts","summary":"","title":"1 + 2 + 3 + ...","type":"posts"},{"content":"What better place to get sushi in Tokyo than an actual fish market? By the time we arrived at Tsukiji Fish Market (around 6:00 AM), the lines were already pretty long. Daiwa Sushi (大和) had a significantly shorter wait. We probably only waited for around 20 minutes before we were seated.
The quality of the fish was amazing. The Ikura was surprisingly good, since it didn\u0026rsquo;t have the fishy taste you get with so much of the roe imported here in Canada.\nProbably not the best sushi you could find in Tokyo, but it was pretty damn close.\n","date":"6 July 2014","externalUrl":null,"permalink":"/posts/da-he/","section":"Posts","summary":"","title":"大和","type":"posts"},{"content":"","date":"16 June 2014","externalUrl":null,"permalink":"/categories/hanoi/","section":"Categories","summary":"","title":"Hanoi","type":"categories"},{"content":"","date":"16 June 2014","externalUrl":null,"permalink":"/tags/hanoi/","section":"Tags","summary":"","title":"Hanoi","type":"tags"},{"content":"","date":"16 June 2014","externalUrl":null,"permalink":"/categories/towers/","section":"Categories","summary":"","title":"Towers","type":"categories"},{"content":"","date":"16 June 2014","externalUrl":null,"permalink":"/tags/towers/","section":"Tags","summary":"","title":"Towers","type":"tags"},{"content":"An age-old puzzle that\u0026rsquo;s challenged kids and pancake chefs alike:\nThe goal of the puzzle is to move an entire stack of disks from one rod/plate to another with the following restrictions:\nOnly one disk can be moved at a time. Each move consists of taking the upper disk from one of the stacks and placing it on top of another stack. No disk may be placed on top of a smaller disk. So how can we go about solving this programmatically? The key insight is that this problem has optimal substructure.
This means that the solution for a larger problem, say for 4 disks, is composed of the solutions for smaller subproblems: 1, 2, and 3 disks.\ndef moveTower(height,fromPole, toPole, withPole): if height \u0026gt;= 1: moveTower(height-1,fromPole,withPole,toPole) moveDisk(fromPole,toPole) moveTower(height-1,withPole,toPole,fromPole) def moveDisk(fp,tp): print(\u0026#34;moving disk from\u0026#34;,fp,\u0026#34;to\u0026#34;,tp) moveTower(3,\u0026#34;A\u0026#34;,\u0026#34;B\u0026#34;,\u0026#34;C\u0026#34;) We first move the top n-1 disks from pole A to C by calling moveTower recursively on the stack of n-1 disks. Then we can move disk n from pole A to B. We can then make the same recursive call and move the n-1 disks from C to B.\nThe recursive solution seems a bit expensive, with the two recursive calls leading to a $O(2^n)$ runtime. Can we do a bit better?\nTop-Down Memoization # Our recursive search right now is solving the same subproblems over and over again. If we saved or memoized the results, we could trade off some of our runtime by using more memory, essentially caching the solution to each subproblem.\nmemo = [] lookup = [\u0026#34;ABC\u0026#34;,\u0026#34;ACB\u0026#34;,\u0026#34;BAC\u0026#34;,\u0026#34;BCA\u0026#34;,\u0026#34;CAB\u0026#34;,\u0026#34;CBA\u0026#34;] def topDown(height,fromPole, toPole, withPole): global memo memo = [[-1 for x in range(6)] for y in range(height+1)] memo[0][0] = \u0026#34;\u0026#34; for x in range(6): memo[0][x] = \u0026#34;\u0026#34; return topDownHelper(height,fromPole, toPole, withPole) def topDownHelper(height,fromPole, toPole, withPole): global memo move = fromPole + toPole + withPole index = 0 while move != lookup[index]: index += 1 if memo[height][index] \u0026lt; 0: memo[height][index] = topDownHelper(height-1,fromPole,withPole,toPole)\\ + fromPole +\u0026#39;-\u0026gt;\u0026#39;+ toPole +\u0026#39;, \u0026#39; \\ + topDownHelper(height-1,withPole,toPole,fromPole) return memo[height][index] print
topDown(3,\u0026#34;A\u0026#34;,\u0026#34;B\u0026#34;,\u0026#34;C\u0026#34;) Bottom-up Dynamic Programming # Another approach would be to intelligently build up our solution, solving for 1 disk, then 2 disks, etc\u0026hellip; By going in the opposite direction of our recursive solution, we save time by not solving the same subproblems. This is a technique dubbed dynamic programming. Since we are building up our solution successively, we only need to remember the solutions that are needed immediately. This saves us some valuable space.\nmoves = [\u0026#34;A-\u0026gt;B\u0026#34;,\u0026#34;A-\u0026gt;C\u0026#34;,\u0026#34;B-\u0026gt;A\u0026#34;,\u0026#34;B-\u0026gt;C\u0026#34;,\u0026#34;C-\u0026gt;A\u0026#34;,\u0026#34;C-\u0026gt;B\u0026#34;] prevLookup = [1,0,3,2,5,4] prev2Lookup = [5,3,4,1,2,0] def bottomUp(height,fromPole, toPole, withPole): prev = [\u0026#34;\u0026#34; for x in range(6)] curr = [\u0026#34;\u0026#34; for x in range(6)] for i in range(height): for j in range(len(moves)): prevStr = prev[prevLookup[j]] + \u0026#39;, \u0026#39; if prev[prevLookup[j]] != \u0026#34;\u0026#34; else \u0026#34;\u0026#34; nextStr = \u0026#39;, \u0026#39; + prev[prev2Lookup[j]] if prev[prev2Lookup[j]] != \u0026#34;\u0026#34; else \u0026#34;\u0026#34; curr[j] = prevStr + moves[j] + nextStr curr,prev = prev,curr return prev print bottomUp(4,\u0026#34;A\u0026#34;,\u0026#34;B\u0026#34;,\u0026#34;C\u0026#34;)[0] Performance # Doing a quick check on each implementation\u0026rsquo;s running time, we see that the recursive solution explodes exponentially, while our memoized/dynamic programming implementations grow linearly. 
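One way to sanity-check the exponential blow-up yourself is to count moves directly: a tower of height h always takes 2^h − 1 moves, so each extra disk doubles the work. A small self-contained sketch (Python 3; the function and variable names here are mine, mirroring the recursive solution above):

```python
def move_tower(height, from_pole, to_pole, with_pole, moves):
    # Same recursion as the moveTower solution above: park n-1 disks on the
    # spare pole, move the largest disk, then bring the n-1 disks back on top.
    if height >= 1:
        move_tower(height - 1, from_pole, with_pole, to_pole, moves)
        moves.append((from_pole, to_pole))
        move_tower(height - 1, with_pole, to_pole, from_pole, moves)

for h in range(1, 11):
    moves = []
    move_tower(h, "A", "B", "C", moves)
    # Each extra disk doubles the number of moves: 2^h - 1.
    assert len(moves) == 2 ** h - 1
print("move counts match 2^h - 1 for h = 1..10")
```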
Clearly dynamic programming is the way to go!\n","date":"16 June 2014","externalUrl":null,"permalink":"/posts/towers-of-hanoi/","section":"Posts","summary":"","title":"Towers of Hanoi","type":"posts"},{"content":"The dragon curve or Heighway Dragon is a beautiful self-similar fractal first investigated by NASA physicists John Heighway, Bruce Banks, and William Harter.\nJack [Heighway] came into my office (actually cubicle) and said that if you folded a one dollar bill repeatedly he thought it would make a random walk or something like that. (We’d been arguing about something in Feller’s book on return chances.) I was dubious but said “Let’s check it out with a big piece of paper.” (Those were the days when NASA could easily afford more than one dollar’s worth of paper.) Well, it made a funny pattern alright but we couldn’t really see it too clearly. So one of us thought to use tracing paper and “unfold” it indefinitely so we could record (tediously) as big a pattern as we wanted. But each time we made the next order, it just begged us to make one more!\nAs they folded, and unfolded, a pattern emerged: When you made a single fold you get a single right turn:\nRight After a second fold, you would get:\nRight Right Left And the third:\nRight Right Left Right Right Left Left Fourth:\nRight Right Left Right Right Left Left Right Right Right Left Left Right Left Left The sequence of turns are actually following a specific pattern:\n$$S\\_{n+1} = S\\_n R \\bar{S\\_n}$$where $\\bar{S}$ is the same sequence but reversed in direction and you replace L\u0026rsquo;s with R\u0026rsquo;s and vice-versa. 
The $(n+1)$th fold appends an $R$, followed by the previous sequence reversed in order and direction.\nThe animation on the sidebar uses this sequence to iteratively trace out the curve:\nfunction dragonCurveIter(seq,iterations) { // 0 - L , 1 - R for(var i = 0; i \u0026lt; iterations; i++) { var a = seq.reverse().map(function(x){return Number(!x)}) seq.reverse().push(1); seq = seq.concat(a); } return seq } Interestingly enough, this recursive definition can also be interpreted geometrically. Appending $R\\bar{S_n}$ is equivalent to taking the original curve rotated by 90 degrees about its endpoint.\n","date":"4 June 2014","externalUrl":null,"permalink":"/posts/dragon-curve/","section":"Posts","summary":"","title":"Dragon Curve","type":"posts"},{"content":"","date":"4 June 2014","externalUrl":null,"permalink":"/categories/fractals-javascript/","section":"Categories","summary":"","title":"Fractals Javascript","type":"categories"},{"content":"","date":"4 June 2014","externalUrl":null,"permalink":"/tags/fractals-javascript/","section":"Tags","summary":"","title":"Fractals Javascript","type":"tags"},{"content":"MTA provides a public dataset of turnstile data per station since May 2010. We can grab weather data from Weather Underground, who kindly provide historical weather data in CSV format. You can also grab the same data through their API.\nData Munging # The data we\u0026rsquo;ve collected so far isn\u0026rsquo;t usable yet, so we\u0026rsquo;ll have to do a bit of data munging.
The MTA data has multiple entries per row, so we should first unwind the data so we get an entry per row:\nimport csv import os with open(\u0026#34;masterfile.csv\u0026#34;, \u0026#39;wb\u0026#39;) as outfile, open(\u0026#34;data/maynycweather.csv\u0026#34;, \u0026#39;wb\u0026#39;) as weather: outWriter = csv.writer(outfile) outfile.write(\u0026#39;C/A,UNIT,SCP,DATEn,TIMEn,\\ DESCn,ENTRIESn,EXITSn\\n\u0026#39;) # column names for name in os.listdir(\u0026#39;./turnstile\u0026#39;): with open(\u0026#39;./turnstile/\u0026#39; + name, \u0026#39;rb\u0026#39;) as infile: inReader = csv.reader(infile) for row in inReader: for i in xrange(3,len(row),5): outWriter.writerow(row[0:3] + row[i:i+5]) For simple manipulations, the csv module will usually suffice. However, doing more complicated operations, such as calculating aggregates or dealing with different datatypes like DateTimes can be difficult and tedious. This is where pandas saves the day!\nimport pandas import datetime def reformat_weather_dates(date): return datetime.datetime.strptime(date,\\ \u0026#39;%Y-%m-%d\u0026#39;).strftime(\u0026#39;%Y-%m-%d\u0026#39;) def reformat_subway_dates(date): return datetime.datetime.strptime(date,\\ \u0026#39;%m-%d-%y\u0026#39;).strftime(\u0026#39;%Y-%m-%d\u0026#39;) df = pandas.read_csv(\u0026#39;masterfile.csv\u0026#39;) dfweather = pandas.read_csv(\u0026#39;data/nyc052013.csv\u0026#39;) df = df[df[\u0026#39;DESCn\u0026#39;] == \u0026#39;REGULAR\u0026#39;] # Filter by REGULAR df[\u0026#39;ENTRIESn_hourly\u0026#39;] = (df[\u0026#39;ENTRIESn\u0026#39;] - df[\u0026#39;ENTRIESn\u0026#39;].shift(1)).fillna(1) # Calculate daily entries dfweather.rename(columns={\u0026#39;EDT\u0026#39;:\u0026#39;DATEn\u0026#39;}, inplace=True) # - rename column names so they can merge df[\u0026#39;DATEn\u0026#39;] = df[\u0026#39;DATEn\u0026#39;].map(reformat_subway_dates) # - reformat so dates match using map dfweather[\u0026#39;DATEn\u0026#39;] = 
dfweather[\u0026#39;DATEn\u0026#39;].map(reformat_weather_dates) final = pandas.merge(df,dfweather,on=\u0026#39;DATEn\u0026#39;) final.to_csv(\u0026#39;final_master.csv\u0026#39;) We can easily filter by specific categories. For example, we\u0026rsquo;ll only want turnstile data from the category of \u0026lsquo;Regular\u0026rsquo;:\ndf = df[df[\u0026#39;DESCn\u0026#39;] == \u0026#39;REGULAR\u0026#39;] # Filter by REGULAR This grabs all the indices of rows that match \u0026lsquo;REGULAR\u0026rsquo;, then regrabs the rows from the original frame.\n# Calculate daily entries df[\u0026#39;ENTRIESn_hourly\u0026#39;] = (df[\u0026#39;ENTRIESn\u0026#39;] - df[\u0026#39;ENTRIESn\u0026#39;].shift(1)).fillna(1) Exploring the Data # One way to get a sense of our data is to visualize it. Here, we\u0026rsquo;ll try out ggplot, a port of the popular R graphing library.\nEntries seem to peak during specific two-hour windows. This seems to correspond to peak operating times, like rush hour (8-9am and 4-5pm), lunch and dinner etc\u0026hellip;\nWe can also compare ridership against the weather, like how many people exit stations relative to the average dew point.\nfrom pandas import * from ggplot import * df = pandas.read_csv(\u0026#39;./turnstile_data_master_with_weather.csv\u0026#39;) df[\u0026#39;meandewpti\u0026#39;] = df[\u0026#39;meandewpti\u0026#39;].map(lambda x: round((x-32.0)*(5.0/9.0),0)) daily = df.groupby(df.meandewpti).EXITSn_hourly.sum() daily.index.name = \u0026#39;day\u0026#39; daily = daily.reset_index() p = ggplot(daily, aes(\u0026#39;day\u0026#39;, weight=\u0026#39;EXITSn_hourly\u0026#39;,alpha=0.5)) + \\ geom_bar(fill=\u0026#34;green\u0026#34;) + \\ theme_xkcd() + \\ ggtitle(\u0026#34;May 2011 - Turnstile Exits by Dew Point\u0026#34;) + \\ xlab(\u0026#34;Degrees Celsius\u0026#34;) + \\ ylab(\u0026#34;# of Exits\u0026#34;) print p Don\u0026rsquo;t think we can infer much from the plot, except that the most popular days in May had a dew point around
14-16°C.\n","date":"19 May 2014","externalUrl":null,"permalink":"/posts/nyc-subway-data-analysis-with-python/","section":"Posts","summary":"","title":"Analyzing NYC Subway Data with Python","type":"posts"},{"content":"","date":"19 May 2014","externalUrl":null,"permalink":"/categories/ggplot/","section":"Categories","summary":"","title":"Ggplot","type":"categories"},{"content":"","date":"19 May 2014","externalUrl":null,"permalink":"/tags/ggplot/","section":"Tags","summary":"","title":"Ggplot","type":"tags"},{"content":"","date":"19 May 2014","externalUrl":null,"permalink":"/categories/pandas/","section":"Categories","summary":"","title":"Pandas","type":"categories"},{"content":"","date":"19 May 2014","externalUrl":null,"permalink":"/tags/pandas/","section":"Tags","summary":"","title":"Pandas","type":"tags"},{"content":"I\u0026rsquo;m a software engineer turned Engineering Manager based in San Francisco. I\u0026rsquo;ve built software across a bunch of different industries over the years: e-commerce, healthcare, logistics and now Twitch!\nOutside of work I\u0026rsquo;m usually training for my next marathon (chasing a Boston qualifier 💪), cooking something that didn\u0026rsquo;t need to be nearly this complicated, learning how to paint, or going deep on a machine learning paper. 
I have a master\u0026rsquo;s in CS with an ML focus, a soft spot for Japanese literature, and a Pokémon card collection I choose not to explain.\nYou can find my code on GitHub and my machine learning experiments on Hugging Face.\n","date":"7 March 2014","externalUrl":null,"permalink":"/about/","section":"Ben Yu","summary":"","title":"Hey, I'm Ben 👋","type":"page"},{"content":"","date":"17 July 2013","externalUrl":null,"permalink":"/categories/coffeescript/","section":"Categories","summary":"","title":"Coffeescript","type":"categories"},{"content":"","date":"17 July 2013","externalUrl":null,"permalink":"/tags/coffeescript/","section":"Tags","summary":"","title":"Coffeescript","type":"tags"},{"content":"","date":"17 July 2013","externalUrl":null,"permalink":"/categories/node/","section":"Categories","summary":"","title":"Node","type":"categories"},{"content":"","date":"17 July 2013","externalUrl":null,"permalink":"/tags/node/","section":"Tags","summary":"","title":"Node","type":"tags"},{"content":"Inspired by this impressive piece of engineering [https://github.com/mame/quine-relay], I tried to write my own quine in CoffeeScript.\nThe trick I used was to encode the entire file into decimal ASCII and store it in the data array. To rebuild the actual code, you convert it back to characters. This lets you store the code, and run it too!\n","date":"17 July 2013","externalUrl":null,"permalink":"/posts/quine/","section":"Posts","summary":"","title":"Quine","type":"posts"},{"content":"Visualizations of the Advent of Code 2021 challenge.
Code on Github\nDay 11 - Flashing Octopuses (Strobe Warning) ","date":"29 December 2012","externalUrl":null,"permalink":"/posts/advent-of-code-2021/","section":"Posts","summary":"","title":"Advent of Code 2021","type":"posts"},{"content":"","date":"29 December 2012","externalUrl":null,"permalink":"/categories/data-viz/","section":"Categories","summary":"","title":"Data Viz","type":"categories"},{"content":"","date":"29 December 2012","externalUrl":null,"permalink":"/tags/data-viz/","section":"Tags","summary":"","title":"Data Viz","type":"tags"},{"content":"","date":"29 December 2010","externalUrl":null,"permalink":"/posts/brothers-1/","section":"Posts","summary":"","title":"Brothers #1","type":"posts"},{"content":" Experimenting with different forms of repetition ","date":"29 December 2010","externalUrl":null,"permalink":"/posts/genuary-2021/","section":"Posts","summary":"","title":"Genuary 2021","type":"posts"},{"content":" ","date":"29 December 2010","externalUrl":null,"permalink":"/posts/harmonographs/","section":"Posts","summary":"","title":"Harmonographs","type":"posts"},{"content":"","date":"29 December 2010","externalUrl":null,"permalink":"/posts/mario-levels/","section":"Posts","summary":"","title":"Mario Levels","type":"posts"},{"content":"","date":"29 December 2010","externalUrl":null,"permalink":"/posts/procedural-fern/","section":"Posts","summary":"","title":"Procedural Fern","type":"posts"},{"content":" ","date":"29 December 2010","externalUrl":null,"permalink":"/posts/shapes/","section":"Posts","summary":"","title":"Shapes","type":"posts"},{"content":"","date":"29 December 2010","externalUrl":null,"permalink":"/posts/skull-still-life-2019/","section":"Posts","summary":"","title":"Skull Still Life - 2019","type":"posts"},{"content":"","date":"29 December 2010","externalUrl":null,"permalink":"/posts/visionary/","section":"Posts","summary":"","title":"Visionary","type":"posts"},{"content":"","date":"29 December 
2010","externalUrl":null,"permalink":"/posts/wine-still-life-2019/","section":"Posts","summary":"","title":"Wine Still Life - 2019","type":"posts"},{"content":"","date":"29 December 1992","externalUrl":null,"permalink":"/categories/bourdain/","section":"Categories","summary":"","title":"Bourdain","type":"categories"},{"content":"","date":"29 December 1992","externalUrl":null,"permalink":"/tags/bourdain/","section":"Tags","summary":"","title":"Bourdain","type":"tags"},{"content":"My riff on a recipe from Anthony Bourdain’s Appetites cookbook:\nHalibut Recipe: # 2 halibut fillets (about 12 ounces each; ask your fishmonger to remove the white belly skin but to leave the dark dorsal skin attached) 1 quart rendered duck fat (available at various gourmet retailers and some butcher shops) 1 lemon 1 tablespoon canola or other neutral oil 1 tablespoon fennel seeds Seeds from 2 cardamom pods 1 bay leaf 4 garlic cloves, peeled and sliced Salt and freshly ground black pepper to taste Salad Recipe: # 2 cups fresh or frozen sweet corn 3/4 cup chopped tomato 1/4 cup chopped onion 3/4 cup red wine vinaigrette Directions: # Using the microplane grater, finely grate the lemon zest into a small mixing bowl and add the oil, fennel and cardamom seeds, bay leaf and garlic, mixing well. Rub the fish on all sides with the mixture and refrigerate in a casserole or zip-seal plastic bag for at least 2 hours and up to 24. Remove the fish from the refrigerator about 15 minutes before you’re ready to poach it. Brush off the excess garlic and seeds. Season it on all sides with salt and pepper. In a large, heavy-bottom pot, heat the duck fat over medium heat until it reaches 150 degrees F, monitoring the temperature with the instant-read thermometer. Slip the fish into the pot and ladle the fat over so it is submerged. Let cook for 5 minutes, then remove from the heat, cover, and let sit for 10 to 15 minutes, until the fish has an internal temperature of 150 degrees F.
Carefully remove the fish from the pot with a slotted spoon or fish spatula, adjust seasoning if necessary. In a large bowl, combine vegetables; stir in dressing. Cover and refrigerate until serving. Plate Halibut with Salad and Serve ","date":"29 December 1992","externalUrl":null,"permalink":"/posts/poached-halibut/","section":"Posts","summary":"","title":"Halibut Poached in Duck Fat with Corn Salad","type":"posts"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/categories/baking/","section":"Categories","summary":"","title":"Baking","type":"categories"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/tags/baking/","section":"Tags","summary":"","title":"Baking","type":"tags"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/posts/beef-wellington/","section":"Posts","summary":"","title":"Beef Wellington","type":"posts"},{"content":"Anthony Bourdain\u0026rsquo;s Recipe from Appetites: A Cookbook (page 111)\n","date":"29 December 1991","externalUrl":null,"permalink":"/posts/duck-rillettes/","section":"Posts","summary":"","title":"Duck Rillettes","type":"posts"},{"content":" ","date":"29 December 1991","externalUrl":null,"permalink":"/posts/ramen-noodles/","section":"Posts","summary":"","title":"Homemade Ramen Noodles","type":"posts"},{"content":"Came out a bit dry, but the sauce was to die for.\nIngredients [serves 4]\n4 boneless pork rib chops or cutlets (about 6 ounces each) ¼ cup soy sauce ¼ cup Chinese rice wine ¼ cup black vinegar 1 tablespoon sesame oil 4 garlic cloves, peeled and coarsely chopped 1 tablespoon five-spice powder 1 tablespoon dark brown sugar, packed 1 large egg ½ cup all-purpose flour 1½ cups panko bread crumbs Salt and freshly ground black pepper to taste 2 cups peanut oil, for frying, plus more as needed 8 slices white sandwich bread Chili paste, for garnish Special Equipment\nMeat mallet or heavy-duty rolling pin Sheet pan or platter lined with newspaper 
Directions\nPound the pork to ¼-inch thickness, using the meat mallet. If using a rolling pin, be sure to wrap the meat in plastic before whacking it (and consider getting yourself a meat mallet).\nIn a small mixing bowl, whisk together the soy sauce, rice wine, vinegar, sesame oil, garlic, five-spice powder, and sugar. Place the pork in a zip-seal plastic bag or nonreactive container and pour the marinade mixture over, turning the chops to ensure that they’re evenly coated with liquid. Seal the bag and refrigerate for at least 1 hour and up to 12 hours.\nRemove the chops from the marinade and brush off the garlic. Beat the egg in a shallow bowl and place the flour and bread crumbs in separate shallow bowls. Season the flour with salt and pepper. You may need to add a tablespoon of water to the beaten egg, to loosen its texture so that it adheres evenly to the meat.\nTo a large, heavy-bottom frying pan, add the peanut oil and heat over medium-high.\nWhile the oil heats, dredge the chops in the flour, batting off any extra, then in the egg, then in the bread crumbs.\nTest the oil with a pinch of bread crumbs. If they immediately sizzle, carefully slide the chops into the hot oil, working in batches if necessary to avoid overcrowding the pan and bringing down the temperature of the oil. Cook for about 5 minutes per side, or until golden brown. Remove the cooked chops from the oil and let drain on the lined sheet pan. 
Season lightly with salt.\nToast the bread until golden brown.\nAssemble the sandwiches and serve with the chili paste alongside.\n","date":"29 December 1991","externalUrl":null,"permalink":"/posts/macau-style-pork/","section":"Posts","summary":"","title":"Macau-style Pork Chop Sandwich","type":"posts"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/posts/om/","section":"Posts","summary":"","title":"Omelette Du Fromage","type":"posts"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/posts/rack-of-lamb-with/","section":"Posts","summary":"","title":"Roast Lamb with Mint Sauce","type":"posts"},{"content":" ","date":"29 December 1991","externalUrl":null,"permalink":"/posts/sourdough/","section":"Posts","summary":"","title":"Sourdough","type":"posts"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/posts/vanocni-cukrovi-czech-susenky-christmas-cookies/","section":"Posts","summary":"","title":"Vanocni Cukrovi (Czech Susenky Christmas Cookies)","type":"posts"},{"content":"","date":"29 December 1991","externalUrl":null,"permalink":"/posts/cha-shao-chinese-bbq-pork/","section":"Posts","summary":"","title":"叉烧 - Chinese BBQ Pork","type":"posts"},{"content":"","date":"29 November 1991","externalUrl":null,"permalink":"/posts/bon-appetit-perfect-roast-turkey/","section":"Posts","summary":"","title":"Bon Appetit Perfect Roast Turkey","type":"posts"},{"content":"","date":"29 November 1991","externalUrl":null,"permalink":"/posts/ivan-orkin-shio-ramen/","section":"Posts","summary":"","title":"Ivan Orkin Shio Ramen","type":"posts"},{"content":"","date":"29 November 1991","externalUrl":null,"permalink":"/categories/ramen/","section":"Categories","summary":"","title":"Ramen","type":"categories"},{"content":"","date":"29 November 1991","externalUrl":null,"permalink":"/tags/ramen/","section":"Tags","summary":"","title":"Ramen","type":"tags"},{"content":"","date":"29 November 
1991","externalUrl":null,"permalink":"/posts/shao-rou-roast-pork-belly/","section":"Posts","summary":"","title":"燒肉 - Roast Pork Belly","type":"posts"},{"content":"","date":"29 October 1991","externalUrl":null,"permalink":"/posts/basque-burnt-cheesecake/","section":"Posts","summary":"","title":"Basque Burnt Cheesecake","type":"posts"},{"content":"","date":"29 October 1991","externalUrl":null,"permalink":"/posts/almond-cookies/","section":"Posts","summary":"","title":"Chinese Almond Cookies","type":"posts"},{"content":"","date":"29 October 1991","externalUrl":null,"permalink":"/posts/kale-and-almond-pesto-pasta/","section":"Posts","summary":"","title":"Kale and Almond Pesto Pasta","type":"posts"},{"content":"","date":"29 October 1991","externalUrl":null,"permalink":"/posts/okonomiyaki/","section":"Posts","summary":"","title":"Okonomiyaki","type":"posts"},{"content":"","date":"29 October 1991","externalUrl":null,"permalink":"/posts/rosemary-cashew-focaccia/","section":"Posts","summary":"","title":"Rosemary Cashew Focaccia","type":"posts"},{"content":"","date":"29 January 1991","externalUrl":null,"permalink":"/posts/squash-rosettes/","section":"Posts","summary":"","title":"Squash Rosettes with Carrot Puree and Truffle Aioli","type":"posts"},{"content":"","externalUrl":null,"permalink":"/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","externalUrl":null,"permalink":"/series/","section":"Series","summary":"","title":"Series","type":"series"}]