To manage and collaborate on the API documentation of a large project, we follow the steps below:

  1. Write the details of each endpoint in a separate YAML file.
  2. Write each model in a separate YAML file so that it can be reused.
  3. Combine all the YAML files into a single spec using `swagger-cli`.
  4. Serve the Swagger doc as HTML using an Express server (see the sketch below).

You can check out these example GitHub repositories for details.
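As a minimal sketch of steps 3 and 4, assuming `express`, `js-yaml`, and `swagger-ui-express` are installed and the bundle was produced with something like `swagger-cli bundle -o openapi.yaml -t yaml index.yaml` (the file names here are placeholders):

import express from 'express';
import fs from 'fs';
import yaml from 'js-yaml';
import swaggerUi from 'swagger-ui-express';

const app = express();

// Load the single spec file that swagger-cli bundled from the per-endpoint YAML files.
const swaggerDocument = yaml.load(fs.readFileSync('openapi.yaml', 'utf8')) as object;

// Serve the interactive HTML documentation at /api-docs.
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));

app.listen(3000, () => console.log('Docs at http://localhost:3000/api-docs'));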

Write the details of each endpoint in a separate YAML file

In the example we have two endpoints, `/contact` and `/product`. A YAML file is created for each endpoint, organized in the folder structure shown in the example repository.

Write each model in a separate YAML file so that it can be reused

The definitions (the reusable models) are organized in a similar folder structure in the example repository.


Scenario

  1. Our application supports different authentication mechanisms for login: Google, Facebook, custom login, etc.
  2. Customers upload images after logging in.
  3. Images are stored in Amazon S3.
  4. A thumbnail is generated for each image, to be used in the list view.

Traditional way

This approach, where images go to the application server first and are then pushed to S3, has some drawbacks:

  1. Bandwidth usage of the application server: since the images are uploaded to the application server first, they consume a major part of its bandwidth.
  2. CPU usage: processing the images and generating thumbnails needs comparatively high CPU.
  3. Third-party software dependence: we have to install image manipulation tools on the server.

To avoid such headaches, we can go for a serverless approach…
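One common serverless pattern for this scenario (an assumption here, not necessarily the exact design) is to hand the client a presigned S3 URL so that the image bytes go directly to S3 and never pass through the application server. A minimal sketch with the AWS SDK for JavaScript v3, with hypothetical bucket and region values:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption

// Returns a URL the authenticated client can PUT the image to directly.
export async function getUploadUrl(key: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "my-uploads-bucket", // hypothetical bucket name
    Key: key,
  });
  return getSignedUrl(s3, command, { expiresIn: 900 }); // valid for 15 minutes
}

Thumbnail generation can then move to a Lambda function triggered by the S3 upload event, which also takes the CPU load and the image-tool dependency off the application server.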


How to find the difference between two dates which are in string format?

The solution was very simple: after executing the code snippet below, we get the required result in the variable `dates`.

#define _XOPEN_SOURCE /* for strptime */
#include <time.h>

struct tm start_parsed = {0};
strptime("2020-01-21 05:00", "%Y-%m-%d %H:%M", &start_parsed);
struct tm end_parsed = {0};
strptime("2038-12-23 05:00", "%Y-%m-%d %H:%M", &end_parsed);
time_t start = mktime(&start_parsed);
time_t end = mktime(&end_parsed);
/* difftime() returns the difference in seconds; convert it to whole days */
int dates = (int)(difftime(end, start) / (60 * 60 * 24));

It worked fine until a customer installed the library on a 32-bit Ubuntu device. On 32-bit systems `time_t` is a signed 32-bit integer that overflows on 2038-01-19, so `mktime` fails for a date like 2038-12-23. We had tested the library only on 64-bit Ubuntu devices, so we couldn't identify this…


First, you need to download the key JSON file to your computer from the Google Cloud console.

One simple solution is to export the absolute path of the key JSON file, as below:

export GOOGLE_APPLICATION_CREDENTIALS=/absolute/path/of/key.json

If you are not satisfied with the above solution, you can do it programmatically from Java. This is also simple.

  1. Add the dependencies to your Gradle file. If you are using Maven, add the corresponding dependencies to `pom.xml`.
implementation group: 'com.google.api', name: 'gax', version: '1.5.0'
implementation group: 'com.google.cloud', name: 'google-cloud-dialogflow', version: '2.0.0'
implementation group: 'io.grpc', name: 'grpc-core', version: '1.30.0'
implementation group: 'io.grpc', name: 'grpc-api', version: '1.30.0'
implementation group: 'io.grpc',

  2. Check out the project from GitHub and start using it, if you are familiar with Dialogflow.
  3. Create a GCP project in the Google console.

Go to the Google console dashboard and create a new project. I created a project with the name `Dialogflow-Test`.


Let us create a multiple-database application (a multi-tenant application).

Problem: Create an application to manage schools and colleges. Each school's/college's data should be saved in a separate database, and the students of each year should be saved in a separate collection.

I solved this problem by using the `mongoose.connection.useDb` method and passing the collection name parameter while creating the model.

The initial Mongo setup is the same: connect to MongoDB using Mongoose.

public mongoUrl: string = 'mongodb://localhost/';

mongoose.connect(this.mongoUrl, {
  useNewUrlParser: true,
});

We do the trick in the model class.

In the controller we call the `createModel` method with two parameters, `college_name` as the DB name…
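As a minimal sketch of the trick (the `Student` schema and the names here are hypothetical): `useDb` picks the database, and the third argument of `model()` picks the collection.

import mongoose, { Schema } from 'mongoose';

// Hypothetical schema for illustration.
const studentSchema = new Schema({ name: String, year: Number });

export function createModel(dbName: string, collectionName: string) {
  // Reuse the existing connection, but switch to the tenant's database...
  const db = mongoose.connection.useDb(dbName);
  // ...and bind the model to a per-year collection via the third argument.
  return db.model('Student', studentSchema, collectionName);
}

// e.g. const Student2020 = createModel('st_marys_college', 'students_2020');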


To communicate between Python and C, we have the library `ctypes`.

Step 1: create a C program in which one function receives a function pointer as a parameter.

In the example below, the function `divide` receives a function pointer as its first parameter; the pointed-to function takes two parameters, the quotient and the remainder.

Save the program as `division.c`.

#include <stdio.h>

void divide(void (*ptr)(int *, int *), int a, int b)
{
  int s = a / b;
  int r = a % b;
  (*ptr)(&s, &r);
}

void print_sum(int *s, int *r)
{
  printf("Quotient is %d, remainder is %d\n", *s, *r);
}

int…


If your CSV files are very large, say more than 100 MB, it can be difficult to concatenate them using conventional methods.

In Python we can use the `shutil` library to concatenate multiple large CSV files into a single CSV file.

Sample code:

import shutil

csv_files = ['source1.csv', 'source2.csv', 'source3.csv', 'source4.csv', 'source5.csv']
target_file_name = 'dest.csv'

# Copy the first file as-is (its header row is kept).
shutil.copy(csv_files[0], target_file_name)

with open(target_file_name, 'a') as out_file:
    for source_file in csv_files[1:]:
        with open(source_file, 'r') as in_file:
            # If your csv doesn't contain a header, remove the following line.
            in_file.readline()  # skip the header row
            shutil.copyfileobj(in_file, out_file)

To test the code, download a sample large CSV file (e.g. https://www.stats.govt.nz/large-datasets/csv-files-for-download/).

Then make some copies of the same file and run the above program.


  1. Install the VLC player on your Android device (I used an Amazon Fire TV stick).
  2. Start the VLC activity from your Android application as follows:
Intent i = new Intent(Intent.ACTION_MAIN);
i.setComponent(new ComponentName("org.videolan.vlc", "org.videolan.vlc.gui.video.VideoPlayerActivity"));
String url = "Your youtube url";
i.putExtra("url", url);
i.setDataAndType(Uri.parse(url), "video/*");
context.startActivity(i);

3. Wait for a moment and VLC will start playing your YouTube URL.


Simple:

  1. Get a JSON file for testing.

Example file: https://github.com/zeMirco/sf-city-lots-json

2. Install the module `pandas` if not installed:

pip install pandas

3. Run the following Python script:

python json-to-csv.py

The `json-to-csv.py` file contains:

import json
import pandas as pd
from pandas.io.json import json_normalize
# (in newer pandas versions, use: from pandas import json_normalize)

f = open('citylots.json')  # open the json file
data = json.load(f)        # load as json
f.close()

df = json_normalize(data['features'])  # flatten the json into a dataframe
df.to_csv('json-to-csv.csv', sep=',', encoding='utf-8')  # save as csv

