Authentication (back end):
The main thing I struggled with on the backend was getting a clean mental model for “what is public vs what
requires login vs what requires ownership.” Reading cookies and managing sessions was straightforward once it
worked, but the tricky part was making sure every write endpoint actually enforced the rules server-side
(because the UI can always be bypassed). I ended up making a simple in-memory session store keyed by a random
token, then adding middleware that checks the token cookie and attaches the user to the request.
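The store described above can be sketched as a small map from token to user id. This is a hypothetical reconstruction, not the actual code; the names (createSession, getUserId) and the 32-byte token size are assumptions:

```typescript
import { randomBytes } from "node:crypto";

// token -> userId; lives only in memory, so sessions die on server restart.
const sessions = new Map<string, number>();

// Create a session: generate an unguessable random token and remember its owner.
function createSession(userId: number): string {
  const token = randomBytes(32).toString("hex");
  sessions.set(token, userId);
  return token;
}

// Look up the user for a cookie token; undefined means "not logged in".
// Middleware would call this with the token cookie and attach the result
// to the request before any ownership checks run.
function getUserId(token: string | undefined): number | undefined {
  return token === undefined ? undefined : sessions.get(token);
}
```

The middleware then just reads the cookie, calls the lookup, and rejects with 401 when it gets undefined.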
Authentication (front end):
On the frontend, the hardest part was keeping state and conditional rendering from spiraling out of control. I had to keep track of whether the user is logged in, fetch /api/auth/me on page load, and then decide what the UI should even
show based on that. Once I had a “me” object in state and made Axios send cookies, it
became a lot easier to reason about.
Deployment:
Deployment was very annoying. On the server I had to install the basic tooling (Node/npm,
git, sqlite3), then deal with path differences once everything was compiled into dist. Relative paths that worked
locally broke when the working directory changed, so I had to make sure the
database path and static asset paths still resolved correctly when running the compiled server. On top of that,
DNS setup in Porkbun tripped me up. The hw4 subdomain didn’t resolve at first, then it resolved on public DNS
but not locally because my machine/network cached the old state.
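The path problem above comes down to resolving against the module's own directory instead of whatever directory the process happened to be launched from. A minimal illustration (the "/srv/app/dist" and "data/books.db" paths are made up for the example):

```typescript
import path from "node:path";

// Anchoring a relative path to the compiled module's directory (__dirname
// in CommonJS output) keeps it stable regardless of the launch directory.
function resolveFromModule(moduleDir: string, relative: string): string {
  return path.resolve(moduleDir, relative);
}

// cwd-relative resolution: the same relative path points somewhere different
// depending on where you ran `node dist/server.js` from.
function resolveFromCwd(cwd: string, relative: string): string {
  return path.resolve(cwd, relative);
}
```

So switching the database path and static asset paths to module-relative resolution makes them survive the move into dist.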
Security audit (XSS):
I don’t think my app is meaningfully vulnerable to stored XSS in the normal use case because I’m not rendering
user input as raw HTML anywhere. The frontend is React, which escapes text by default when you render it
normally (I never used dangerouslySetInnerHTML), so if someone enters something like
<script>...</script> it should be displayed as text, not executed. On the backend I also validate
inputs with Zod (type/format checks), and the auth token is stored in an HttpOnly cookie, which helps limit
damage even if someone somehow got script execution.
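To make the "React escapes by default" point concrete, here is a hand-rolled version of the transformation React applies to text children automatically (React does this internally; this helper is purely illustrative):

```typescript
// What escaping does to a would-be payload: the markup characters become
// HTML entities, so the browser renders them as text instead of parsing them.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

This is why a stored `<script>` tag shows up on screen as literal angle brackets rather than executing.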
Security audit (CSRF):
My mitigation is mainly (1) I don’t
use GET requests for state-changing actions (writes are POST/PUT/DELETE), and (2) the auth cookie is set with
SameSite=Lax, which reduces the chance the browser will attach the cookie on cross-site requests. For a class
project this is a reasonable baseline, and I’m not exposing any “click this link and
it deletes your account”-style GET endpoints that could be exploited.
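For reference, the cookie attributes in question look like this on the wire. In the real app this is set through Express's cookie options; the cookie name "session" and this string-builder are assumptions for illustration:

```typescript
// Builds the Set-Cookie header value with the mitigations discussed above:
// HttpOnly keeps the token away from client-side JS, and SameSite=Lax stops
// the browser from attaching the cookie on most cross-site requests.
function buildSessionCookie(token: string): string {
  const attrs = ["HttpOnly", "SameSite=Lax", "Path=/"];
  return `session=${token}; ${attrs.join("; ")}`;
}
```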
Rate limiting:
I added rate limiting at the application level using the express-rate-limit middleware. I configured a stricter
limiter on the login/register endpoints to make brute forcing harder, and a more general limiter on /api routes
to prevent spammy request bursts. When I tested it by hammering the login endpoint, it started returning 429
after the limit, which is exactly what we want for basic brute-force protection.
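The idea behind express-rate-limit's default strategy is a fixed-window counter: N requests per key per window, then 429 until the window rolls over. A minimal sketch of that logic (the limits and the class itself are illustrative, not the library's internals):

```typescript
// Fixed-window rate limiting: count requests per key, reset each window.
class FixedWindowLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private max: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if it should get a 429.
  allow(key: string, now: number): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.max;
  }
}
```

In the app, one instance with a small max guards login/register, and a looser one sits in front of /api generally.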
HTTP headers:
I used Helmet to set security-related HTTP headers automatically. The main benefit is it applies a set of
recommended defaults that reduce common attacks (for example, reducing information leakage
like X-Powered-By, adding protections around how the browser treats content types, and tightening cross-origin
behaviors). These headers help the browser enforce safer defaults, which means even if the app has a
mistake, the browser is less likely to do something insecure “by default.”
Anything else I did for security:
Besides rate limiting and Helmet, the biggest security-related choice was keeping the session token in an
HttpOnly cookie and enforcing authorization on the backend. HttpOnly reduces the chance of token theft via
client-side JavaScript, and server-side authorization checks make sure users can’t just bypass the UI with curl
and edit/delete other people’s books. I also kept request bodies small with a JSON size limit and validated
inputs with Zod, which helps reduce weird edge cases and makes the API harder to abuse.
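The server-side authorization rule boils down to a three-way check before any write. This is a sketch with assumed names (ownerId, the status strings); the real handler maps these outcomes to 401/404/403:

```typescript
interface BookRow { id: number; ownerId: number; title: string; }

// Even if the UI hides the buttons, the API must verify ownership itself,
// because anyone can replay the request with curl.
function canModify(
  book: BookRow | undefined,
  userId: number | undefined,
): "ok" | "unauthorized" | "not-found" | "forbidden" {
  if (userId === undefined) return "unauthorized"; // no valid session -> 401
  if (book === undefined) return "not-found";      // row doesn't exist -> 404
  if (book.ownerId !== userId) return "forbidden"; // someone else's book -> 403
  return "ok";
}
```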
It was pretty easy to reuse the POST logic for the PUT edit endpoint. Most of the patterns were already set up from HW1/HW2 (zod validation, parsing IDs, error handling, returning JSON), so I mostly just followed the same structure. The biggest changes were swapping the SQL from INSERT to UPDATE and making sure I handled the “book not found” case properly (since updating a missing row shouldn’t silently succeed). I also had to think a bit more about what the response should look like after an update so the front end could refresh cleanly.
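The "book not found" rule above can be shown with an in-memory stand-in for the UPDATE query (SQLite reports a `changes` count; here the array lookup plays that role, and the shapes are illustrative):

```typescript
interface Book { id: number; title: string; year: number; }

// Mirrors the PUT handler's core decision: if no row matched the id,
// return 404 instead of quietly reporting success; otherwise return the
// updated row so the frontend can refresh cleanly.
function updateBook(
  books: Book[],
  id: number,
  patch: Partial<Omit<Book, "id">>,
): { status: 200; book: Book } | { status: 404; error: string } {
  const book = books.find((b) => b.id === id);
  if (book === undefined) {
    return { status: 404, error: "book not found" };
  }
  Object.assign(book, patch);
  return { status: 200, book };
}
```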
Writing the tests was also pretty straightforward because the pattern for endpoint tests already existed. Once I had the earlier tests working, the edit/delete tests were mostly just more of the same: set up known data, make the request, check the status code and response body, and verify the database change by doing a follow-up GET.
I integrated edit/delete directly into the books table by adding an Actions column with Edit and Delete buttons per row. For editing, I used a dialog-based flow instead of inline editing. The dialog approach felt cleaner to me because it gives you a focused place to edit the fields, validate them, and show errors without trying to manage partial edits inside table cells. For deletion, I used a confirmation dialog because deleting is destructive and it’s easy to misclick.
The hardest part was the state management. There are a lot of moving pieces: which book is currently being edited, the draft values for each input field, when to clear errors/success messages, and when to refresh the table. Keeping the “currently editing book” state correct (and making sure the dialog opens with the right pre-filled values, then resets properly after save/cancel) was the trickiest part. Once that clicked, the rest of the UI logic was much easier.
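The "currently editing book" lifecycle can be modeled as a tiny state machine: open the dialog pre-filled from the row, edit drafts, then reset everything on save/cancel. This pure-function sketch uses assumed shapes and action names, not the actual component code:

```typescript
interface Book { id: number; title: string; }

interface EditState {
  editing: Book | null;  // which book the dialog is for (null = closed)
  draftTitle: string;    // the input field's working value
  error: string | null;  // validation/server error to show in the dialog
}

type Action =
  | { type: "open"; book: Book }   // Edit button clicked on a row
  | { type: "draft"; title: string } // user types in the field
  | { type: "close" };             // save succeeded or user cancelled

function editReducer(state: EditState, action: Action): EditState {
  switch (action.type) {
    case "open":
      // Pre-fill the draft from the selected book and clear stale errors.
      return { editing: action.book, draftTitle: action.book.title, error: null };
    case "draft":
      return { ...state, draftTitle: action.title };
    case "close":
      // Reset so the next open doesn't leak the previous book's values.
      return { editing: null, draftTitle: "", error: null };
  }
}
```

Centralizing the transitions like this is what made the pre-fill/reset behavior stop being fragile.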
Material UI’s API felt pretty verbose at first, mostly because you end up writing a lot of components (Stack, Box, Typography, Dialog, etc.) where I previously had simple HTML tags. But after using it for a bit, it felt very consistent, and the look and feel improved immediately. The defaults look clean and professional, and components like Dialog and Alert made it much easier to build a nice UI.
At first it was fairly difficult because I was unfamiliar with the library and had to learn the right components and props to get the layout and spacing to behave the way I wanted. I spent some time going back and forth between the docs and my code to figure out what patterns were “normal” in MUI. But it got easier as I went, and once I understood the basic building blocks (Stack for layout, TextField for inputs, Dialog for modals, Button variants/colors), refactoring the rest of the page became pretty mechanical.
The main thing I changed on the backend side was making all the routes start with /api so the Vite
dev server can proxy requests cleanly. That ended up being really nice because my frontend code could just call
/api/books and /api/authors without hardcoding ports or dealing with CORS stuff. While
building the UI, I also realized I cared a lot more about consistent response shapes and error messages than I
did when I was only writing the backend. When you’re building a frontend, it’s immediately obvious if your API
errors are vague, because you’re the one who has to display them. So I leaned into letting the server return
descriptive Zod errors and then just showing those in the UI.
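The proxying described above is configured in vite.config.ts; a minimal version looks like this (the backend port 3000 is an assumption):

```typescript
import { defineConfig } from "vite";

// Forward anything under /api to the Express server during development,
// so frontend code can call /api/books with no CORS setup or hardcoded ports.
export default defineConfig({
  server: {
    proxy: {
      "/api": "http://localhost:3000",
    },
  },
});
```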
For validation, I mostly used server-side validation, especially for the forms. I relied on the backend’s Zod schemas to reject bad input and then displayed the returned error messages. The upside is it keeps validation logic in one place and guarantees the rules are the “real” rules. The downside is the user only finds out after hitting submit. The one place I did add a bit of client-side validation was the year search input, just because it’s super simple to check “4 digits” and it avoids sending obviously bad requests. But even there, if the server ever rejects something, the UI still needs to handle it.
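The one client-side check mentioned, "4 digits" on the year search input, is small enough to show in full (the function name is illustrative; the server still re-validates regardless):

```typescript
// Quick pre-flight check on the year search field: exactly four digits.
// This only avoids obviously bad requests; it is not the real validation.
function isPlausibleYear(input: string): boolean {
  return /^\d{4}$/.test(input.trim());
}
```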
React itself was honestly fine, but I definitely had to look up the patterns. The big thing is that React feels like you’re not “doing things” directly; you’re just updating state, and then React re-renders for you. Once I got back into the mindset of “UI is a function of state,” it became pretty straightforward to build the page by keeping small pieces of state for each input field, plus separate state for success/errors/loading.
Compared to plain DOM manipulation, React feels way less annoying once you’re past the initial learning curve. With DOM code, you’re constantly selecting elements, creating elements, attaching event listeners, and manually updating the page. With React, I basically described what the UI should look like and then used state updates to drive changes. I prefer React for anything that has forms + tables + dynamic updates, because the “automatic rerender” model is just cleaner than manually keeping DOM updates in sync.
TypeScript on the frontend was mostly helpful, but also a little tedious sometimes. It helped a lot when dealing
with API responses, because I could write things like axios.get<{ books: Book[] }>(...) and
then have autocomplete and type checking on what came back. At the same time, it still doesn’t enforce anything
at runtime, so it doesn’t replace validation. The backend still needs to validate everything because the client
can always lie.
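The "TypeScript doesn't enforce anything at runtime" point is easy to demonstrate: a type annotation or cast only tells the compiler what to assume, while the actual JSON can be anything. A tiny hand-rolled guard (doing crudely what Zod does properly) makes the gap visible; the Book shape here is illustrative:

```typescript
interface Book { id: number; title: string; }

// The cast compiles fine even though the data is wrong: TS trusts us blindly.
const raw = JSON.parse('{"id":"oops","title":null}') as Book;

// A runtime check is the only thing that actually inspects the value.
function isBook(x: unknown): x is Book {
  return (
    typeof x === "object" && x !== null &&
    typeof (x as Book).id === "number" &&
    typeof (x as Book).title === "string"
  );
}
```

This is exactly why the backend validates request bodies with Zod even though everything is typed.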
I did use LLMs for this homework. I used them mostly like a tutor: remembering the hook patterns, controlled inputs, how to structure the POST handlers, and how to handle Axios errors cleanly. I also used them to sanity-check how I was thinking about state and rendering (like what should be a separate state variable vs derived from other state).
LLMs definitely made the assignment more fun and saved time. Same as last time, the biggest difference for me is that I don’t fall into Google rabbit holes anymore. If I have a specific question, I can ask it and then drill down until it makes sense.
I think I spent around 8–10 hours on this assignment. Most of that time went into writing the actual endpoints and getting the logic right: for example, deciding what each endpoint should return, how to handle missing IDs vs invalid IDs, and making sure the database operations matched what the API was promising.
The biggest struggle for me was understanding how TypeScript and Zod fit into the picture and where/why I’m supposed to use each one. I get that TypeScript is “types” and Zod is “validation,” but in practice I kept mixing them up and not always knowing what was TypeScript helping me vs what was Zod doing at runtime. I think a couple smaller, focused exercises would help a lot (like: “here’s an endpoint body, validate it with Zod and show what the parsed type becomes,” or “here’s a query param, type it safely without any”). Basically, I wanted a clearer mental model of when you’re supposed to lean on TypeScript vs when you’re supposed to lean on Zod.
TypeScript did help catch some bugs, even if I didn’t always notice it in the moment. For example, it forced me to handle cases where a database lookup could return undefined (like getting an author or book by ID), which prevented me from accidentally returning “success” with an empty response. It also helped when dealing with Express inputs, like params and query strings, because those don’t come in as the types you wish they were. It pushed me to explicitly parse/convert and handle bad inputs, instead of just assuming everything is valid.
That said, I’m still not fully confident that I always know what parts of my code were TypeScript “features” vs Zod “features.” I know Zod is validating incoming JSON bodies at runtime and TypeScript is giving compile-time safety, but sometimes it felt like I was just copying a pattern without fully internalizing why it’s structured that way. I definitely feel more comfortable now than when I started, but I’d still like more practice with the boundaries.
Testing was honestly sped up a lot by using an LLM. I basically defined what cases I wanted (happy path, invalid input, not found, foreign key failure, delete rules, filter behavior) and the LLM generated the test skeletons. I still verified them and edited where needed, but it saved a lot of time on writing repetitive boilerplate and remembering exact Axios/Vitest patterns. It also made me more willing to write more tests because it didn’t feel like a huge chore.
The tests did help me catch a couple real issues. One was around mixing up query params vs path params (I originally leaned toward a route shape that didn’t match REST filtering conventions), and tests made it obvious the behavior didn’t match what I intended. Another was around error handling and status codes, like accidentally returning 200 with an undefined object instead of returning a 404, and realizing deletes should return 204 with no body. Having tests made those mistakes show up immediately.
If I were to structure testing differently in the future, I’d still keep the idea of resetting the database before each test, since that made the tests a lot less fragile and easier to reason about. In general, I’d say the tests helped me make sure that I wasn’t breaking old stuff by adding new code.
I used LLMs during this assignment, mostly as a tutor. I used them to figure out TypeScript/Zod patterns, like how to define schemas, what safeParse returns, and what those types mean. I also used them to discuss how to interact with SQLite from TypeScript (like db.get vs db.all vs db.run, and what “RETURNING *” does). Basically, whenever I had a “why is this shaped like this?” question, I’d ask the LLM and then drill down until it made sense.
Overall I think LLMs made this assignment way more fun and saved a lot of time. I didn’t get stuck in Google rabbit holes. If I had a specific question, I could ask it and get an answer immediately, and then ask follow-ups until it clicked. It made it easier to stay focused and build momentum instead of getting blocked for hours on one library detail. I also got more comfortable with TypeScript during this assignment because I could keep asking “what does this line do?” and “why do we do it this way?” until it made sense.