Hacker News
Python Requests 3 (twitter.com/kennethreitz42)
30 points by josuebrunel 6 months ago | 10 comments



What's the historical context here?


https://vorpus.org/blog/why-im-not-collaborating-with-kennet...

It's the first link in the apology page (which in turn is linked to from the tweet).

BTW, Nathaniel J Smith, author of the blog post I've linked to here, is the creator of Trio and all-round awesome open source guy. His other blog posts are well worth reading.


I don’t know exactly what the root causes are (it could be a mix of many things), but the maintainers of what seems like _a lot_ of the core Python tools seem to be utter assholes, completely unable to interact with any part of the community - but especially beginners - without being outright rude and abrasively hostile.

This isn’t limited to Python, of course, but (for instance) Rust has this reputation as a language, yet I’ve rarely seen it among its popular package maintainers.



Wow. Irresponsibility at its best.


He’s a huge scumbag.


He built the best HTTP library for Python and it’s free to use. That’s gotta alleviate some of people’s negative feelings towards him. That is huge value created for the world.


The main value of Requests is that it provided an abstract interface on top of HTTP, which was designed well enough to become a de facto standard. But today it has fallen way behind in its field, and there are much better alternatives such as HTTPX [0].

[0] https://www.python-httpx.org/


I hope to one day see what I've wanted at times for almost 20 years: something a bit like requests as a low-entry-barrier API but that allows nearly optimal use of the network for high-throughput scenarios too. Learn a few more API features to expand your use case, rather than having to discard it all and try some other completely different approach with completely different gotchas. But something that could be useful in data science scripts etc, not something special and dicey that only works with one carefully tuned application.

A basic problem is that Python asyncio doesn't expose disk/filesystem IO. It's very common for people to be scripting against an endpoint (rather than building some network-to-network proxy), so they need to easily write async/concurrency scenarios that source or sink from files while efficiently tying into this kind of high-throughput requests layer. It seems a bit much to expect one HTTP client library author to fix this core deficit in the Python platform.

For high performance, you really need to balance disk IO and network IO and also avoid WAN pitfalls like synchronous HTTP/1.1 socket stalls. You need to embrace some kind of async or concurrency paradigm to specify request streams that can be pipelined without false inter-request dependencies. You also need to think about how to expose request or connection-level failures and allow sensible forms of retry within this logical request stream. It's kind of useless if it only supports toy/demo scenarios but cannot sustain high request throughput for hours or days in practice.

Then, you also need some sensible flow-control to avoid stupid results like hundreds of concurrent HTTP/2 streams competing over one socket when it would be better to essentially serialize and pipeline them back-to-back with just a handful of in-flight requests necessary to fully stuff the WAN pipe. You want this limited concurrency for IO optimization and buffer management, to avoid thrashing your end systems. I think it would be better to have this as scheduling policies configured in the core API constructs, with some canned heuristics or self-adaptation. There should be some point of blocking/push-back to pace a naive application, rather than having some API which happily allows the naive application to push things past the breaking point.

And over the WAN, you really need parallel TCP as well for high bandwidth. You can't just declare that, now that HTTP/2 multiplexes streams over one socket, there is no need for multiple sockets. It's important to have multiple TCP windows rather than one single window that has to scale too far and is prone to collapse.


> it became apparent that the dependencies I was relying on—or considering for use—simply didn't meet the necessary standards

Basically, didn’t he just raise money with promises of features, then go to the underlying libraries' authors with a “you should do all these features for free, so I can argue the money I got for them was worth it” while doing shit-all himself?

So, basically, he’s capping off his attempt at fundraising - and at getting someone else to do the actual work for free - with an “I’m sorry, but I’m keeping the money and not delivering anything”?

> What was left to do?

> Integration with a low-level HTTP library ready for the task.

You mean literally all the actual work to make the product do anything useful…

No one took issue with Requests just being a thin usability interface on top of other people's work, because it was free and open source and didn’t raise funds on the claimed functionality. But the second you start raising funds and don’t want to spend a single dollar on the people who do all of the real work, you’re a POS, and deserve all the criticism. The second you take the money but bail on the project because others won’t do the work for free for you, you become a con man.



