- 26 Dec, 2025
For day 2, I think we are good. I liked today's progress, although it is slower than I would like it to be. It is fine; soon we will get good. So today I worked on implementing simple async I/O with the httpx library to fetch multiple URLs at the same time.
19. Remove Nth Node From End of List
- The question itself is easy. However, there was an added constraint that I had to do it in one pass.
- I looked at the hint, and it said to have two pointers, where one pointer moves at the speed of n, where n is the nth node from the end of the list.
- I thought I'd have count_r and count_l variables to take the count first, and then by the time count_l touches count_r, I'd know to stop. But then I realised that it was a dumb idea, because if n is the length of the linked list, or anywhere close to it, my idea might not work.
- Okay, so I read the hint wrong. It said to have the second pointer delayed by n, not moving at a speed of n. Anyway, I understand it now. Second day of realising I'm dumb. It's fine; we will get good soon.
- And as for the way to correct the edge case I talked about in my third point, I could initialise a dummy node, which the solution helped me discover (see the sketch after this list).
- Day 2 and I saw the solution. It is fine; I'll soon do it myself. I feel fine with this because I almost found the solution, and these are niche things that come with a specific data structure or algorithm. I won't mind until I have understood most of the patterns.
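For my own reference, here is a minimal sketch of the one-pass, two-pointer idea with the dummy node, written from my understanding of the hint and the solution; ListNode and remove_nth_from_end are just my own names for it.

import "nothing needed, plain Python"  # (no imports required)

class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def remove_nth_from_end(head, n):
    # The dummy node makes removing the head itself a non-special case.
    dummy = ListNode(0, head)
    fast = slow = dummy
    # Give fast a head start of n + 1 nodes...
    for _ in range(n + 1):
        fast = fast.next
    # ...so that when fast walks off the end, slow sits just
    # before the nth node from the end.
    while fast:
        fast = fast.next
        slow = slow.next
    slow.next = slow.next.next  # unlink the target node
    return dummy.next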
Learndocs
Currently, this is the code that I wrote for the async fetch. It may be childish, but it helped me understand coroutine functions and async/await, specifically in Python. I also realised today that I haven't properly utilised async/await even in TypeScript, which I am most comfortable with. Well, it's fine; I am happy that I learnt it today. Now I know, at least at a basic level, how to use concurrency for I/O-bound tasks. The synchronous main function took around 12 seconds to fetch all of these URLs, whereas the async function took around 3 seconds. Below is the very basic code that I wrote today.
import httpx
import time
import asyncio
import requests


def main():
    # Synchronous version: fetches each URL one after another with requests.
    urls = [
        "https://httpbin.org/get",
        "https://httpbin.org/delay/2",
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://quotes.toscrape.com/",
        "https://en.wikipedia.org/wiki/Main_Page",
        "https://api.github.com/users/realbLanK993",
        "https://www.google.com",
        "https://httpbin.org/status/404",
        "https://books.toscrape.com/",
        "https://www.reddit.com/.json",
    ]
    start = time.perf_counter()
    for url in urls:
        r = requests.get(url)
        if r.status_code == 200:
            print(f"Successfully fetched: {url}")
        else:
            print(f"Some error while fetching {url}\nError Code: {r.status_code}")
    end = time.perf_counter()
    print("-" * 50)
    print(f"Total time taken: {end - start:.2f} seconds")


async def fetch_url(client, url):
    # Coroutine: awaiting client.get() hands control back to the event
    # loop, so the other fetches can make progress while this one waits.
    print(f"Fetching {url}")
    r = await client.get(url)
    if r.status_code == 200:
        print(f"Successfully fetched {url}")
    else:
        print(f"Error while trying to fetch {url}\nError Code: {r.status_code}")


async def async_main():
    # Async version: all URLs are fetched concurrently on one event loop.
    urls = [
        "https://httpbin.org/get",
        "https://httpbin.org/delay/2",
        "https://jsonplaceholder.typicode.com/posts/1",
        "https://quotes.toscrape.com/",
        "https://en.wikipedia.org/wiki/Main_Page",
        "https://api.github.com/users/realbLanK993",
        "https://www.google.com",
        "https://httpbin.org/status/404",
        "https://books.toscrape.com/",
        "https://www.reddit.com/.json",
    ]
    start = time.perf_counter()
    tasks = []
    async with httpx.AsyncClient() as client:
        for url in urls:
            tasks.append(fetch_url(client, url))
        # gather() runs all the coroutines concurrently and waits for
        # every one of them to finish.
        await asyncio.gather(*tasks)
    end = time.perf_counter()
    print("-" * 50)
    print(f"Total time taken: {end - start:.2f} seconds")


asyncio.run(async_main())
# main()
As for tomorrow, I'll spend time trying to understand how I could run a CPU-heavy task while these I/O-bound tasks get completed.
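To leave myself a starting point: my current guess, which I still need to verify tomorrow, is that CPU-heavy work would block the event loop, so it has to be handed off to a separate thread or process. Below is a minimal, unverified sketch of the direction I want to explore, using loop.run_in_executor with a ProcessPoolExecutor; count_primes is just a made-up placeholder for a CPU-heavy task.

import asyncio
from concurrent.futures import ProcessPoolExecutor


def count_primes(limit):
    # Made-up CPU-heavy placeholder: naive prime counting.
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n**0.5) + 1)):
            count += 1
    return count


async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Hand the CPU-bound function to a worker process so the event
        # loop stays free to drive I/O-bound coroutines in the meantime.
        cpu_work = loop.run_in_executor(pool, count_primes, 50_000)
        io_work = asyncio.sleep(1, result="pretend I/O finished")
        print(await asyncio.gather(cpu_work, io_work))


if __name__ == "__main__":
    asyncio.run(main())

Well, that's day 2, y'all. See you tomorrow.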