Commit Graph

105 Commits

Author SHA1 Message Date
Chintan Shah d92490b808 Moved comparison to the top 2019-11-03 20:17:58 -05:00
Chintan Shah a3b76a56c6 Added PyTorch vs TF MAE comparison 2019-11-03 20:14:40 -05:00
Chintan Shah bb32eb0f46 Changed README to reflect PyTorch implementation 2019-10-30 12:30:45 -04:00
Chintan Shah 073f1d4a6e updated epoch num 2019-10-08 17:32:16 -04:00
Chintan Shah b2d2b21dbd logging MAE 2019-10-08 13:46:34 -04:00
Chintan Shah f720529ac9 demoing with test data 2019-10-08 13:11:43 -04:00
Chintan Shah d2913fd6f1 converting to CPU 2019-10-08 13:09:02 -04:00
Chintan Shah f92e7295a0 added run_demo_pytorch 2019-10-08 13:05:49 -04:00
Chintan Shah dda7013f07 returning predictions from the model during eval at every timestep 2019-10-08 12:56:20 -04:00
Chintan Shah 46b552e075 updated eps value 2019-10-08 02:44:13 -04:00
Chintan Shah 02fb2430f0 removed logging of every horizon 2019-10-08 02:10:59 -04:00
Chintan Shah 765142de00 refactored 2019-10-07 20:55:26 -04:00
Chintan Shah 5d7694e293 logging to 4 decimals 2019-10-07 20:54:58 -04:00
Chintan Shah f6e6713f74 fixed range bug 2019-10-07 20:40:54 -04:00
Chintan Shah 2560e1d954 Added per timestep loss 2019-10-07 20:03:00 -04:00
Chintan Shah 3d93008a3e improved saving and restoring of model 2019-10-07 11:56:14 -04:00
Chintan Shah a5a1063160 fixed docstring 2019-10-07 10:48:48 -04:00
Chintan Shah 5509e9aae5 Ensured all parameters are added to the optimizer 2019-10-07 09:47:38 -04:00
Chintan Shah de42a67391 added logging statement 2019-10-07 07:59:41 -04:00
Chintan Shah 941675d6a7 Added kwargs 2019-10-06 18:57:13 -04:00
Chintan Shah 96d8dc4417 handling nans in loss tensor 2019-10-06 18:55:35 -04:00
Chintan Shah 5dd0f1dd3a implemented masked mae loss, added tensorflow writer, changed % logic 2019-10-06 18:08:13 -04:00
Chintan Shah a8814d5d93 Added docstrings 2019-10-06 17:12:06 -04:00
Chintan Shah ad8ac8ff2f Merge branch 'pytorch_integration' into pytorch_scratch 2019-10-06 17:07:54 -04:00
Chintan Shah 5b93f3c778 Merge branch 'pytorch_integration' of github.com:chnsh/DCRNN into pytorch_integration 2019-10-06 17:02:35 -04:00
Chintan Shah 9fb999c3bb squash! Added dcrnn_cell 2019-10-06 17:01:49 -04:00
Chintan Shah d1964672c2 Added dcrnn_cell 2019-10-06 17:00:23 -04:00
Rough implementation complete - could forward pass it through the network

Ensured sparse mm for readability, logging sparsely as well

moving tensors to GPU

moving tensors to GPU [v2]

moving tensors to GPU [v3]

logging and refactor

ensured row major ordering

fixed log message
Chintan Shah d46b605a65 ensured row major ordering 2019-10-06 15:53:14 -04:00
Chintan Shah 6331173f44 logging and refactor 2019-10-06 15:22:57 -04:00
Chintan Shah ec5d9555a5 logging and refactor 2019-10-06 15:15:11 -04:00
Chintan Shah 55a087ac9f logging and refactor 2019-10-06 14:40:53 -04:00
Chintan Shah 02c4681ad9 logging and refactor 2019-10-06 14:38:44 -04:00
Chintan Shah b6a2b3fe8e logging and refactor 2019-10-06 14:34:58 -04:00
Chintan Shah 036e552bf6 logging and refactor 2019-10-06 14:31:46 -04:00
Chintan Shah 31acadedce logging and refactor 2019-10-06 14:29:28 -04:00
Chintan Shah e563e1bf37 moving tensors to GPU [v3] 2019-10-06 14:13:02 -04:00
Chintan Shah 017ec70783 moving tensors to GPU [v2] 2019-10-06 14:10:20 -04:00
Chintan Shah ba304e9f04 moving tensors to GPU 2019-10-06 14:00:54 -04:00
Chintan Shah 9454fd91a2 Ensured sparse mm for readability, logging sparsely as well 2019-10-06 13:44:55 -04:00
Chintan Shah 2e1836df40 Rough implementation complete - could forward pass it through the network 2019-10-06 13:24:37 -04:00
Chintan Shah b65df994e4 Added dcrnn_cell 2019-10-06 11:55:02 -04:00
Chintan Shah e80c47390d Merge branch 'pytorch_implementation' into pytorch_scratch 2019-10-06 11:49:49 -04:00
Chintan Shah 5a790d5586 cuda no grad 2019-10-04 23:30:10 -04:00
Chintan Shah 593e3db1bf Using model.cuda() if cuda is available 2019-10-04 22:45:08 -04:00
Chintan Shah 8d3b1d0d66 Implemented lr annealing schedule 2019-10-04 21:18:05 -04:00
Chintan Shah ba880b8230 Implementing load and save models and early stopping 2019-10-04 17:25:03 -04:00
Chintan Shah d9f41172dc Implemented eval function 2019-10-04 17:07:38 -04:00
Chintan Shah 20c6aa5862 Fixed bugs with refactoring 2019-10-04 16:05:52 -04:00
Chintan Shah 2b8d5e6b31 Refactored code and moved everything into a DCRNN forward pass 2019-10-04 13:02:50 -04:00
Chintan Shah f41dc442b0 Implemented gradient clipping and returning output from training one batch 2019-10-03 19:35:54 -04:00
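Commits 5dd0f1dd3a and 96d8dc4417 mention implementing a masked MAE loss and handling NaNs in the loss tensor. A minimal sketch of that kind of loss in PyTorch is shown below; the function name `masked_mae_loss` and the convention that `null_val` (here 0.0) marks missing sensor readings are illustrative assumptions, not taken from the repo's code.

```python
import torch


def masked_mae_loss(y_pred, y_true, null_val=0.0):
    # Illustrative sketch, not the repo's exact implementation.
    # Entries equal to null_val are treated as missing and excluded.
    mask = (y_true != null_val).float()
    # Normalize so the loss scale is comparable regardless of how
    # many entries were masked out.
    mask /= mask.mean()
    loss = torch.abs(y_pred - y_true) * mask
    # Guard against NaNs (e.g. when every entry is masked).
    loss = torch.where(torch.isnan(loss), torch.zeros_like(loss), loss)
    return loss.mean()
```

With `y_true = [2, 0, 4]` and `y_pred = [1, 2, 3]`, the middle entry is masked out and the remaining two absolute errors (each 1.0) are averaged, giving a loss of 1.0.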