We can put the class in a servlet and edit web.xml; no source code needs to be changed.
Using this, we can delegate PDF generation from the servlet to a Node server.
---
Here's an example excerpt of a web.xml file for communicating with a Solr server:
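The excerpt itself is not reproduced here; the following is only a sketch of a typical servlet registration of this kind, where the servlet name, class name, and URL pattern are placeholders, not the original values:

```xml
<!-- Hypothetical proxy-servlet registration; names and paths are placeholders. -->
<servlet>
  <servlet-name>ProxyServlet</servlet-name>
  <servlet-class>com.example.ProxyServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>ProxyServlet</servlet-name>
  <url-pattern>/proxy/*</url-pattern>
</servlet-mapping>
```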
Puppeteer is a powerful tool for controlling a web application on the server side using headless Chromium.
It is mainly used for automated testing, but one of its useful features is generating PDF from HTML.
One of the main problems with generating PDF from HTML in a client-side browser is that the layout may change when the browser is updated (actually, I am only considering Chrome/Chromium).
If the PDF generation is done on the server side, we can pin a specific version of Chromium on the server, while clients can update their browsers without losing the correct PDF layout.
Puppeteer can be installed with 'npm i puppeteer'.
The following is sample code for an HTTP server that converts HTML into PDF.
This service takes parameters, generates an HTML string from them, and creates a PDF from that string, so no file system access is needed for the HTML or the PDF (efficient).
There are a few tricks in the above code.
1) In order to process link elements in the HTML that refer to local files, we need to call page.goto(url) before calling page.setContent(html).
2) The web root location must be provided by the client (we could hard-code it, but if the app is packaged in a WAR file, we cannot know the folder location statically).
3) In a Java servlet, the method getServletContext().getRealPath("/") provides this information; webRootPath in the above code holds this value.
sozu is difficult to build on Windows, and a bit old (2018).
If we use this, an existing server app running in GlassFish can be moved to a Rust-based server without affecting the client-side application. For instance, DB-related server-side code can be migrated to the Rust-based server.
extern crate hyper;
extern crate futures;
extern crate hyper_reverse_proxy; // the proxying itself is done by this crate

use hyper::server::conn::AddrStream;
use hyper::{Body, Request, Server};
use hyper::service::{service_fn, make_service_fn};
use futures::future::Future;

fn main() {
    // This is our socket address...
    let addr = ([127, 0, 0, 1], 13900).into();

    // A `Service` is needed for every connection.
    let make_svc = make_service_fn(|socket: &AddrStream| {
        let remote_addr = socket.remote_addr();
        service_fn(move |req: Request<Body>| { // returns BoxFut
            println!("path: {}, ip: {}", req.uri().path(), remote_addr.ip());
            hyper_reverse_proxy::call(remote_addr.ip(), "http://127.0.0.1:8080", req)
        })
    });

    let server = Server::bind(&addr)
        .serve(make_svc)
        .map_err(|e| eprintln!("server error: {}", e));

    println!("Running server on {:?}", addr);

    // Run this server for... forever!
    hyper::rt::run(server);
}
Parallel matrix computation is an interesting topic for testing how well zero-cost abstraction is achieved in Rust.
Also, if it can increase performance, it is practically useful.
This was actually published 2 years ago.
While the code is pretty neat, it is interesting to implement this with crossbeam so that an SPMC channel can be applied directly; basically, this is the typical situation for a load-balancing server.
In fact, the straightforward modification did not work.
It seems better to use hyper.
There is an interesting GitHub project that is quite close to my goal, but it is not using crossbeam.
So it would be better to use it as the basis for a multi-threaded HTTP server implementation with crossbeam.
After trying to combine crossbeam and tokio, I found this is not simple.
In particular, the latest approach of Tokio relies on async, and async and threads do not seem to coexist easily.
Once a function includes async code, all the calling code must be async.
Also, a thread's closure parameter should not be async, and free variables are not permitted because of variable lifetimes.
There has been some discussion of mpsc channels versus mpmc-like approaches; if we use the Tokio and hyper libraries, we may not need mpmc, at least for HTTP-related applications,
since hyper already supports multi-threaded handlers.
Anyway I will investigate hyper now. (hyper uses tokio)
I tried to write a generic function which converts a list to a receiver channel.
Originally I wrote this function taking a Vec, but I wanted to change it to a Collection type, a sort of 'super' type of Vec; however, there is no such type in Rust.
There has been interesting discussion in this direction for Rust:
In Java, there are interfaces such as Collection and Map that are used to identify generic collections; e.g., ArrayList and HashSet implement Collection, and HashMap and TreeMap implement Map.
I'd like to have this capability in Rust as well, in case I'm designing a library and want the user to be able to decide which implementation of a collection they'd like to use. However, I noticed Rust doesn't have traits to represent a generic collection. I was wondering if this was an intentional design decision and what the reasoning was.
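Rust's closest analogue for this purpose is the IntoIterator trait: instead of a Collection super-type, the function can accept anything convertible to an iterator. A minimal sketch using the std mpsc channel (crossbeam would look almost identical; apart from the create_receiver name mentioned above, the details are my own illustration):

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Accept anything that can be turned into an iterator (Vec, HashSet, ...),
// and feed its items into a channel from a background thread.
fn create_receiver<C>(collection: C) -> Receiver<C::Item>
where
    C: IntoIterator + Send + 'static,
    C::Item: Send + 'static,
{
    let (tx, rx) = channel();
    thread::spawn(move || {
        for item in collection {
            // A send error just means the receiver was dropped; ignore it.
            let _ = tx.send(item);
        }
    });
    rx
}

fn main() {
    let rx = create_receiver(vec![1, 2, 3, 4]);
    let sum: i32 = rx.iter().sum();
    println!("sum = {}", sum); // 1+2+3+4 = 10
}
```

The receiver's iterator ends when the sending thread finishes and drops the Sender, so consumers need no explicit termination signal.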
The following code is a re-implementation of the simple Go channel program listed below.
It just reads strings from an input channel, converts them to struct data, and sends them to another channel; then, when the whole input string channel has been processed, it starts printing the data from the output channel. (This behavior is not very natural; it is just sample code to see how the join occurs.)
It is now possible to write Rust code quite similar to the corresponding Go channel code using the crossbeam library.
Although this is a simple program, it requires an SPMC channel; the current Rust std library only supports mpsc channels, and it becomes quite complicated to simulate spmc using only mpsc channels. See the several related articles found on the web.
The last part of the Rust book also describes a similar thing with a complicated approach.
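As an illustration of that std-only workaround, one common trick is to wrap the single mpsc Receiver in an Arc<Mutex<...>> so several worker threads can share it. The function below is my own minimal sketch of the idea, not code from those articles:

```rust
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use std::thread;

// Simulate an SPMC channel with std's mpsc: wrap the single Receiver
// in Arc<Mutex<..>> so several workers can pull messages from it in turn.
fn fan_out_sum(n_workers: usize, upto: i32) -> i32 {
    let (tx, rx) = channel::<i32>();
    let shared_rx = Arc::new(Mutex::new(rx));

    let handles: Vec<_> = (0..n_workers)
        .map(|_| {
            let rx = Arc::clone(&shared_rx);
            thread::spawn(move || {
                let mut local_sum = 0;
                loop {
                    // Hold the lock only long enough to take one message.
                    let msg = rx.lock().unwrap().recv();
                    match msg {
                        Ok(n) => local_sum += n,
                        Err(_) => break, // channel closed and drained
                    }
                }
                local_sum
            })
        })
        .collect();

    for i in 1..=upto {
        tx.send(i).unwrap();
    }
    drop(tx); // close the channel so the workers can finish

    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("total = {}", fan_out_sum(4, 100)); // 1+2+...+100 = 5050
}
```

This works, but every receive takes the mutex, which is exactly the kind of overhead a real SPMC/MPMC channel like crossbeam's avoids.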
Btw, this Rust code defines a generic create_receiver function, but the Go version cannot, since Go has no generics.
In the end, Go makes it easy to write code containing a data race,
while rustc guarantees the code has no data race issues.
In fact, from a programming point of view, Rust is a higher-level language for concurrency problems than Go, while its runtime performance is better than Go's.
    println!(">> start info printing.");
    for info in info_r.iter() {
        println!("n: {}, s: {}", info.n, info.s);
    }
    println!("done.");
}

fn get_data() -> Vec<String> {
    let mut v: Vec<String> = Vec::new();
    for i in 0..10000 {
        let s = format!("s{}", i);
        v.push(s);
    }
    return v;
}
GO:
package main
import (
    "fmt"
    "strings"
    "sync"
)

type Info struct {
    p int
    s string
}

func create_ssc() chan string {
    ssc := make(chan string)
    go func() {
        defer close(ssc)
        for _, s := range data() {
            ssc <- s
        }
    }()
    return ssc
}
I found that Rust does not yet officially support a Multi Producer Multi Consumer channel.
It only supports the mpsc library, i.e., a Multi Producer Single Consumer channel.
This makes it difficult to write Go-style channel programs in Rust.
There has been quite intensive research activity in recent years to support MPMC; see:
Basically, at the beginning of 2019, crossbeam became available to fulfill this requirement.
Servo is already using it.
I don't know what happened to this library in 2019, but its GitHub is still active these days.
I'm looking for a web framework which enables us to develop web applications in a way similar to AngularJS.
I don't like many aspects of AngularJS: it depends too much on runtime inspection/modification, which makes it difficult to reason about how it works. It is almost magical, voodoo-style programming.
But I like the clean separation of view (HTML) and logic/model (JS).
While I looked at several web frameworks for Rust, all of them seem to mix the presentation (HTML) into Rust.
HTML often becomes very complicated, so it is not a good idea to include it in Rust code.
Of course, the action semantics must be written in Rust, but nothing more than that should be.
Yew seems closest to my idea, but it relies on the html! macro,
and actions are mixed into the HTML description.
Rocket takes a more server-based approach, so I may use it for some server-side applications, but there may be a simpler library for that purpose. (I need to investigate later.)
Also, its development seems to have almost stopped 1 year ago, and a lot of the samples are too old. This is a bad sign.
--
So my plan is to investigate other frameworks which are closer to my ideal,
and I may develop the missing parts using wasm-bindgen, yew, and servo.
For instance:
1) We write HTML which includes directives, then parse it and generate another, normal HTML file (no special directives) as well as the corresponding event-handling code.
The nasty part of code generation is that it is difficult to keep hand-modified code and generated code in sync,
so ideally the generated code and the hand-written parts should coexist.
2) To implement such a code generator, we may use servo's HTML parser, html5ever.
3) Then we write a transformer from a directive HTML node into a normal HTML node, which also generates the associated action-logic Rust code.
4) It might be simpler to use wasm-bindgen for this part rather than mapping to yew's element model,
since the latter would duplicate the node structure in both the browser and Rust.
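As a toy illustration of step 1), here is a std-only sketch that extracts a hypothetical rs-click="handler" directive from an HTML string; the directive name, and the plain string scanning instead of html5ever, are both assumptions for illustration:

```rust
// Extract hypothetical rs-click="name" directives from an HTML string,
// returning (cleaned HTML, handler names to generate Rust stubs for).
fn strip_click_directives(html: &str) -> (String, Vec<String>) {
    const ATTR: &str = " rs-click=\"";
    let mut cleaned = String::new();
    let mut handlers = Vec::new();
    let mut rest = html;
    while let Some(start) = rest.find(ATTR) {
        cleaned.push_str(&rest[..start]);
        let after = &rest[start + ATTR.len()..];
        let end = after.find('"').expect("unterminated attribute");
        handlers.push(after[..end].to_string());
        rest = &after[end + 1..];
    }
    cleaned.push_str(rest);
    (cleaned, handlers)
}

fn main() {
    let src = r#"<button rs-click="save">Save</button>"#;
    let (html, handlers) = strip_click_directives(src);
    println!("{}", html);       // <button>Save</button>
    println!("{:?}", handlers); // ["save"]
}
```

A real generator would of course work on the parsed node tree from html5ever rather than raw strings, and would emit the event-wiring code alongside the handler names.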
Anyway, I need to check these more.
Probably a project of this type already exists; I'm just not aware of it.
I rewrote this code in Go to see the coding-style difference and the runtime performance.
A Go channel mixes the notions of sender and receiver into a single channel: the same channel is used both to send the message (data) and to receive it.
Also, the termination condition relies on a special library, sync.WaitGroup (wg).
I definitely like the clean, data-race-free Rust approach.
As for performance, Rust is, as expected, faster than Go, but not significantly so.
But this is a rather simple case; for other kinds of processing which allocate from the heap in Go, there might be a more significant difference.
For DIFFICULTY '0000000': Go: 12 sec, Rust: 9.7 sec
RUST sample:
Finished release [optimized] target(s) in 0.03s
Running `target\release\mpsc-crypto-mining.exe`
Attempting to find a number, which - while multiplied by 42 and hashed using SHA-256 - will result in a hash ending with 0000000.
Please wait...
Found the solution.
The number is: 50443823.
Result hash: 411e3c717da473d023d6c5aa11d330ffed3fd4c641bd75eafcc779b5e0000000.
real 0m9.661s
-----
For DIFFICULTY '00000004': Go: 1551 sec, Rust: 1444 sec
RUST sample:
$ time cargo run --release
Compiling mpsc-crypto-mining v0.1.0 (C:\Users\nnaka\rust_projects\mpsc-crypto-mining)
Finished release [optimized] target(s) in 1.34s
Running `target\release\mpsc-crypto-mining.exe`
Attempting to find a number, which - while multiplied by 42 and hashed using SHA-256 - will result in a hash ending with 00000004.
Please wait...
Found the solution.
The number is: 6829102344.
Result hash: aa60ef885f7d41903661d03a55aca85ae195fdb63bb1b4cbc03e804d00000004.
real 24m4.707s
----
Go code:
Test Machine:
I used a laptop I purchased recently: Lenovo Flex 14 2-in-1 Convertible, 14-inch FHD touchscreen display, AMD Ryzen 5 3500U, 12GB DDR4 RAM, 256GB NVMe SSD, Windows 10. (I did not know AMD would release the Ryzen 4000H series; still, this may not have been such a bad buy.)
This test was also useful for learning the performance of this CPU. When we run the 8-thread version, it utilizes all CPU threads; the 4-thread version mainly uses 4 threads and takes 15 sec vs 12 sec for 8 threads. Although the actual core count is 4, the 8-thread version is faster than the 4-thread version, though not twice as fast.
And when running with all threads busy, the frequency is boosted to only 2.8 GHz; not so fast.
So if we used a Threadripper 3990X, this might be done 16 x (4/3) ≈ 21 times faster.
I wonder which version, a GPU-accelerated one or the multicore one, would provide the faster result.
It is hard to update Node to the latest version on Ubuntu; even after adding the PPA for v8, the installed version did not move from v6 to v8.
The following article worked:
Once the prerequisite packages are installed, you can pull down the nvm installation script from the project’s GitHub page. The version number may be different, but in general, you can download it with curl:
It will install the software into a subdirectory of your home directory at ~/.nvm. It will also add the necessary lines to your ~/.profile file so that nvm is loaded in future sessions.
To gain access to the nvm functionality, you’ll need to log out and log back in again, or you can source the ~/.profile file so that your current session knows about the changes:
source ~/.profile
Now that you have nvm installed, you can install isolated Node.js versions.
To find out the versions of Node.js that are available for installation, you can type:
nvm ls-remote
Yew is a modern Rust framework inspired by Elm and React for creating multi-threaded frontend apps with WebAssembly.
The framework supports multi-threading & concurrency out of the box. It uses Web Workers API to spawn actors (agents) in separate threads and uses a local scheduler attached to a thread for concurrent tasks.
-----
This is the framework I was looking for.
Rocket was not a good fit: it may be usable for server-side service development, but then it should not be called a web framework.
Also, its development stopped 1 year ago, and many of its dependencies have become too old, so we should not use such a library.
--
In order to run the tests, we need to use Node v8 or greater (v8 is OK), because the tests use the new async syntax.
Older versions do not support it (without an option flag).
v12 is the better version to use now.