The Libervia / Salut à Toi project is working on a protocol bridge, so that !xmpp-based #socnets and #ActivityPub + #HTTP based social networks can interact.
!loadaverage #HTTP #Error #Code #500 #Internal #Server #Error when posting. #Screenshot included. Interestingly, trying to post with an attachment (the image) fails with a different error: "[File] DB_DataObject error []: MDB2 Error: null value violates not-null constraint" - So the screenshot is missing from this post. #fixme
Does anyone know if current #browsers and #caches #support the #HTTP post-check Cache-Control directive? I don't remember seeing it used widely, even though it's very handy for some resources with a long max-age. #cache #control
```yaml
solr6:
  image: alfresco/alfresco-search-services:1.4.2.1
  mem_limit: 4g
  environment:
    # - JAVA_OPTS=-Dsolr.log.level=FINE
    # Solr needs to know how to register itself with Alfresco
    - SOLR_ALFRESCO_HOST=alfresco
    - SOLR_ALFRESCO_PORT=8080
    # Alfresco needs to know how to call Solr
    - SOLR_SOLR_HOST=solr6
    - SOLR_SOLR_PORT=8983
    # Create the default alfresco and archive cores
    - SOLR_CREATE_ALFRESCO_DEFAULTS=alfresco,archive
    # HTTP by default
    - ALFRESCO_SECURE_COMMS=none
```
#Gemini protocol aims to fit between #Gopher and #HTTP ... simple, modern, encrypted, private.
This looks interesting. I like gopher, but I'm bothered by:
- the list of item types (newer file types not on the list)
- lack of TLS
- lack of virtual (sub-) domain hosting
It looks like Gemini aims to fix all those things without becoming a complexity and privacy nightmare like the Web is.
#TodayILearned that the new #Java11 #java.net.http #HttpClient and #HttpRequest are not equipped to handle multipart/form-data out of the box. One has to generate and add one's own form boundaries.
This is a problem for those of us who never needed to know the #HTTP/1.1 and #Multipart RFCs by heart, and had hoped that the tools built into the programming language's ecosystem would handle common use cases.
Which HTTP component do you use for multipart/form-data?
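For reference, hand-rolling a multipart body with the Java 11 client looks roughly like this. This is just a sketch: the boundary, the "comment" field name, and the example.com URL are all made up for illustration, and a real client should pick a random boundary rather than a fixed one.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class MultipartSketch {
    public static void main(String[] args) {
        // Fixed boundary for illustration only; use a random one in real code
        String boundary = "JavaClientBoundary1234";
        // One text field named "comment" (hypothetical field name),
        // each part delimited by "--" + boundary, body terminated by "--" + boundary + "--"
        String body = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"comment\"\r\n"
                + "\r\n"
                + "hello world\r\n"
                + "--" + boundary + "--\r\n";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/upload")) // placeholder URL
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofString(body, StandardCharsets.UTF_8))
                .build();
        System.out.println(request.headers().firstValue("Content-Type").orElse("?"));
    }
}
```

The boundary in the Content-Type header must match the one used in the body byte-for-byte, which is exactly the bookkeeping the stock client leaves to you.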
Then I noticed that this new #HTTP client in #Java11 had HTTP/2 turned on by default. It should fall back to HTTP/1.1 automatically if the remote server indicates it doesn't speak version 2.
So I explicitly made that client talk HTTP/1.1.
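Pinning the protocol version is a single builder call; a minimal sketch (no actual request is sent):

```java
import java.net.http.HttpClient;

public class Http11Client {
    public static void main(String[] args) {
        // Pin the client to HTTP/1.1 instead of the default
        // "try HTTP/2, fall back if the server declines" behaviour
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)
                .build();
        System.out.println(client.version());
    }
}
```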
To no avail.
But that wasn't apparent immediately. At first I thought it had helped: the first response body after switching wasn't gzipped.
I was sure I wasn't asking for gzipped response bodies. So I looked up the #HTTP RFC once more for the #AcceptEncoding #requestHeader. It lets us state the kinds of encoding we accept (compress, deflate, etc.) and also set a quality indicator.
If set to 0 for an encoding, we tell the remote server not to supply it.
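With the Java 11 client that header can be set like this; the URL is a placeholder and the exact q-value syntax is the one from the RFC, where `gzip;q=0` means "do not send me gzip":

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class NoGzipRequest {
    public static void main(String[] args) {
        // q=0 marks gzip as unacceptable; identity (no transform) stays allowed
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api")) // placeholder URL
                .header("Accept-Encoding", "gzip;q=0, identity;q=1")
                .GET()
                .build();
        System.out.println(request.headers().firstValue("Accept-Encoding").orElse("?"));
    }
}
```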
Yay!
Or not: the remote REST API server I was addressing responded that it didn't like my Accept-Encoding header.
I looked up the HTTP RFC: yes, it is called Content-Length, and yes, it's a count of octets, not characters. And it can be used only when the contents are neither streamed nor chunked. My code met those requirements.
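Side note: the Java 11 BodyPublishers report that length in octets, and multi-byte UTF-8 characters make the difference visible. A tiny check (the string is just an example):

```java
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class ContentLengthOctets {
    public static void main(String[] args) {
        // "héllo" is 5 characters but 6 octets in UTF-8 ("é" takes two bytes);
        // contentLength() reports the octet count the client will send as Content-Length
        var publisher = HttpRequest.BodyPublishers.ofString("héllo", StandardCharsets.UTF_8);
        System.out.println(publisher.contentLength());
    }
}
```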
@ng0 @mmn @cwebber my first thought is that web servers are for serving documents. Since most folks are obviously not of the opinion that should be the extent of the tech (and we have #gopher for that anyhow), we should probably drastically change #HTTP.
@strypey One problem is that #HTTP is incomplete. You should be able to edit pages in your browser and click 'Save', regardless of where on the Web your files live. We're stuck with kludges that got the first-mover advantage. #WebDAV